Re: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
On Thu, Apr 18, 2013 at 11:31 PM, Jason Wilkins jason.a.wilk...@gmail.com wrote:
> I don't quite think I understand what you are saying. Are you saying that mathematical models are not a good foundation for computer science because computers are really made out of electronic gates?

No, I'm really trying to point out that models based on Digital Logic and models based on Symbolic Logic are completely different -- they have different bases. They are both kinds of mathematics, and the fact that you can translate between them as a demonstration doesn't help with the practical issue of keeping the two domains separate -- they have differing logics. It's like the domain of the Natural numbers versus the Complex, or perhaps the Natural versus the Real. Yes, you can translate back and forth, but for all practical purposes they are distinct and can't be mixed.

> All I need to do is show that my model reduces to some basic physical implementation (with perhaps some allowances for infinity) and then I can promptly forget about that messy business and proceed to use my clean mathematical model.

If that's all you want to do, you can stick with Boolean Logic.

> The reason any model of computation exists is that it is easier to think about a problem in some terms than in others. By showing how to transform one model to another you make it possible to choose exactly how you wish to solve a problem.

Yes, and I'm attempting to argue that the (historically?) dominant model of symbolic calculus is misinforming the practical business of working out differences and arguments within my own domain, the programming community. Unfortunately, my inexperience with the literature is undermining the presentation of my point.

> The reason we do not work directly in what are called von Neumann machines is that they are not convenient for all kinds of problems. However, we can build a compiler to translate anything to anything else, so I don't see why anybody would care.
I'm trying to say that *I* care, because I can't seem to find the common ground between the thousands of people in the applied C.S. domain and the thousands of people in the theoretical C.S. domain. MarkJ Tacoma -- http://mail.python.org/mailman/listinfo/python-list
Free book, Hacking Secret Ciphers with Python
I've released my third book, "Hacking Secret Ciphers with Python", for free under a Creative Commons license. This book is aimed at people who have no experience with programming or with cryptography. The book goes through writing Python programs that not only implement several ciphers but also can hack these ciphers. Each chapter presents a new program and explains how the source code works. You can download the book from http://inventwithpython.com/hacking 100% of the proceeds from the book sales will be donated to the Electronic Frontier Foundation, Creative Commons, and The Tor Project. -- http://mail.python.org/mailman/listinfo/python-list
Re: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
> I think there is some misunderstanding here. Being mathematical in academic work is a way of making our ideas rigorous and precise, instead of trying to peddle wooly nonsense.

I'm sorry. I am responsible for the misunderstanding. I used the word "math" when I really meant symbolic logic (which, historically, was part of philosophy). My point is that the field is confusing because it seems to ignore binary logic in favor of symbolic logic. Is binary logic not rigorous and precise enough? -- MarkJ Tacoma, Washington -- http://mail.python.org/mailman/listinfo/python-list
Re: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
On Saturday, April 20, 2013 12:41:03 AM UTC+8, Ned Batchelder wrote:
> On 4/19/2013 12:16 PM, Steven D'Aprano wrote:
>> On Fri, 19 Apr 2013 12:02:00 -0400, Roy Smith wrote:
>>> PS: a great C++ interview question is, "What's the difference between a class and a struct?" Amazing how many self-professed C++ experts have no clue.
>> I'm not a C++ expert, but I am an inquiring mind, and I want to know the answer!
> The only difference between a class and a struct is that classes default to private access for their members, and structs default to public. --Ned.

In Python even a class can be decorated. Also, features of instances can be added at run time, by programs from different programmers or even by programs from machines via the code-generation schemes used in many CAD tools. Nowadays the concept of a structure is not clear without specifying the language used in programming. A list is a structure of non-homogeneous types of items in Lisp, Perl and Python. But the cases are different in C++, Pascal, Ada and Java. -- http://mail.python.org/mailman/listinfo/python-list
Re: Encoding NaN in JSON
On Fri, Apr 19, 2013 at 9:42 PM, Grant Edwards invalid@invalid.invalid wrote:
> The OP asked for a string, and I thought you were proposing the string 'null'. If one is to use a string, then 'NaN' makes the most sense, since it can be converted back into a floating point NaN object. I infer that you were proposing a JSON null value and not the string 'null'?

Not me; Wayne Werner proposed to use the JSON null value. I parsed the backticks (`) used by him as a way to delimit it from text, not as a string. PS. On 2013-04-19, Chris “Kwpolska” Warrick kwpol...@gmail.com wrote: Is Unicode support so hard, especially in the 21st century? -- Kwpolska http://kwpolska.tk | GPG KEY: 5EAAEA16 stop html mail | always bottom-post http://asciiribbon.org | http://caliburn.nl/topposting.html -- http://mail.python.org/mailman/listinfo/python-list
Re: Free book, Hacking Secret Ciphers with Python
On 20/04/2013 08:37, asweig...@gmail.com wrote:
> I've released my third book, "Hacking Secret Ciphers with Python", for free under a Creative Commons license. This book is aimed at people who have no experience with programming or with cryptography. The book goes through writing Python programs that not only implement several ciphers but also can hack these ciphers. Each chapter presents a new program and explains how the source code works. You can download the book from http://inventwithpython.com/hacking 100% of the proceeds from the book sales will be donated to the Electronic Frontier Foundation, Creative Commons, and The Tor Project.

Very good book! A thousand thanks for your invaluable work. By the way, who is Aaron? Karim France -- http://mail.python.org/mailman/listinfo/python-list
Re: Different cache filename
On 04/20/2013 01:37 AM, Fabian PyDEV wrote:
> Hi, when I load a module mymodule.py with importlib.machinery.SourceFileLoader, a bytecode file is created as mymodule.cpython-33.pyc. If I load a module mymodule.ext.py the same way, the same bytecode file is created as mymodule.cpython-33.pyc. Is there any way I could tell Python to generate the bytecode file as mymodule.ext.cpython-33.pyc? Regards, Fabian

Rename the source file to a legal one, such as mymodule_ext.py, and you won't have the problem. A period is not a valid character within a Python identifier. -- DaveA -- http://mail.python.org/mailman/listinfo/python-list
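For what it's worth, Python can report which cache path it computes for a given source file via importlib.util.cache_from_source() (available since 3.2), and for that function a dot in the stem is preserved; a small sketch (the exact tag in the output, e.g. cpython-33, depends on the interpreter running it, so whatever collision the original poster saw presumably comes from how the loader was invoked, not from this naming rule):

```python
import importlib.util

# Ask Python which bytecode cache path it computes for a given source
# file; the tag (e.g. cpython-33) depends on the running interpreter.
plain = importlib.util.cache_from_source("mymodule.py")
dotted = importlib.util.cache_from_source("mymodule.ext.py")
print(plain)   # e.g. __pycache__/mymodule.cpython-33.pyc
print(dotted)  # e.g. __pycache__/mymodule.ext.cpython-33.pyc
```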
Re: unzipping a zipx file
On 19.04.13 20:59, b_erickson1 wrote:
> I have Python 2.6.2 and I am trying to get it to unzip a file made with WinZip Pro. The file extension is zipx. This is on a Windows machine where I have to send them all the files necessary to run. I am packaging this with py2exe. I can open the file with
>
>     zFile = zipfile.ZipFile(fullPathName, 'r')
>
> and I can look through all the files in the archive with
>
>     for filename in zFile.namelist():
>
> but when I write the file out with this code:
>
>     ozFile = open(filename, 'w')
>     ozFile.write(zFile.read(filename))
>     ozFile.close()
>
> that file still looks encrypted.

AFAIK some archivers use the zipx extension for zip files which contain files compressed with advanced compression methods (bzip2, lzma, etc). Python supports bzip2 and lzma compression in zip files since 3.3.

> No errors are thrown.

Python 2.7 and 3.2 should raise an exception (this bug was fixed several months ago). 2.6 is too old and this fix was not backported to it.

> What am I missing?

Use Python 3.3 or at least 2.7. -- http://mail.python.org/mailman/listinfo/python-list
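As a quick check of the 3.3 behaviour, here is a sketch that round-trips a bzip2-compressed zip member in memory -- the kind of entry a zipx archive may contain. (Note also that extracted members should be written in binary mode, 'wb', so the bytes survive intact on Windows.)

```python
import io
import zipfile

# Build a zip archive in memory with a bzip2-compressed member
# (requires Python 3.3+; older zipfile only knows stored/deflated).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_BZIP2) as zf:
    zf.writestr("hello.txt", b"hello, zipx")

buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    data = zf.read("hello.txt")  # decompresses transparently on 3.3+
print(data)  # b'hello, zipx'
```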
Re: Ubuntu package python3 does not include tkinter
On 19.04.2013 19:42, lcrocker wrote:
> I understand that for something like a server distribution, but Ubuntu is a user-focused desktop distribution. It has a GUI, always. The purpose of a distro like that is to give users a good experience. If I install Python on Windows, I get to use Python. On Ubuntu, I don't, and I think that will confuse some users. I recently recommended Python to a friend who wants to start learning programming. Hurdles like this don't help someone like him.

It's _so_ easy to install an additional package on Ubuntu that that really shouldn't be called a 'hurdle'. Using tkinter or any other GUI toolkit is much more difficult for a beginner. -- http://mail.python.org/mailman/listinfo/python-list
itertools.groupby
I have a file such as:

$ cat my_data
Starting a new group
a
b
c
Starting a new group
1
2
3
4
Starting a new group
X
Y
Z
Starting a new group

I am wanting a list of lists:

['a', 'b', 'c']
['1', '2', '3', '4']
['X', 'Y', 'Z']
[]

I wrote this:

#!/usr/bin/python3
from itertools import groupby

def get_lines_from_file(file_name):
    with open(file_name) as reader:
        for line in reader.readlines():
            yield(line.strip())

counter = 0

def key_func(x):
    if x.startswith("Starting a new group"):
        global counter
        counter += 1
    return counter

for key, group in groupby(get_lines_from_file("my_data"), key_func):
    print(list(group)[1:])

I get the output I desire, but I'm wondering if there is a solution without the global counter. -- http://mail.python.org/mailman/listinfo/python-list
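One way to keep groupby but drop the module-level global is to close over the counter with nonlocal (Python 3 only); a minimal sketch, with an in-memory list standing in for the my_data file:

```python
from itertools import groupby

def make_key_func(sentinel):
    # Close over the counter instead of using a module-level global.
    counter = 0
    def key_func(line):
        nonlocal counter
        if line.startswith(sentinel):
            counter += 1
        return counter
    return key_func

# In-memory stand-in for the my_data file from the post.
lines = [
    "Starting a new group", "a", "b", "c",
    "Starting a new group", "1", "2", "3", "4",
    "Starting a new group", "X", "Y", "Z",
    "Starting a new group",
]

# Each group starts with its sentinel line; [1:] drops it.
groups = [list(g)[1:] for _, g in
          groupby(lines, make_key_func("Starting a new group"))]
print(groups)  # [['a', 'b', 'c'], ['1', '2', '3', '4'], ['X', 'Y', 'Z'], []]
```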
Is Unicode support so hard...
In a previous post, http://groups.google.com/group/comp.lang.python/browse_thread/thread/6aec70817705c226# , Chris “Kwpolska” Warrick wrote: “Is Unicode support so hard, especially in the 21st century?” -- Unicode is not really complicated and it works very well (more than two decades of development, if you take into account iso-14). But, as usual, people prefer to spend their time making a "better Unicode than Unicode", and it usually fails. Python does not escape this rule. I'm busy with TeX (a Unicode engine variant), fonts and typography. This gives me plenty of ideas to test the flexible string representation (FSR). I have to recognize this FSR is failing particularly well... I can almost say, a delight. jmf Unicode lover -- http://mail.python.org/mailman/listinfo/python-list
Re: Is Unicode support so hard...
On 4/20/2013 1:12 PM, jmfauth wrote:
> In a previous post, http://groups.google.com/group/comp.lang.python/browse_thread/thread/6aec70817705c226# , Chris “Kwpolska” Warrick wrote: “Is Unicode support so hard, especially in the 21st century?” -- Unicode is not really complicate and it works very well (more than two decades of development if you take into account iso-14). But, - I can say, as usual - people prefer to spend their time to make a better Unicode than Unicode and it usually fails. Python does not escape to this rule. - I'm busy with TeX (unicode engine variant), fonts and typography. This gives me plenty of ideas to test the flexible string representation (FSR). I should recognize this FSR is failing particulary very well... I can almost say, a delight. jmf Unicode lover

I'm totally confused about what you are saying. What does "make a better Unicode than Unicode" mean? Are you saying that Python is guilty of this? In what way? Can you provide specifics? Or are you saying that you like how Python has implemented it? "FSR is failing ... a delight"? I don't know what you mean. --Ned. -- http://mail.python.org/mailman/listinfo/python-list
Re: itertools.groupby
On 4/20/2013 1:09 PM, Jason Friedman wrote:
> I have a file such as:
>
> $ cat my_data
> Starting a new group
> a
> b
> c
> Starting a new group
> 1
> 2
> 3
> 4
> Starting a new group
> X
> Y
> Z
> Starting a new group
>
> I am wanting a list of lists:
>
> ['a', 'b', 'c']
> ['1', '2', '3', '4']
> ['X', 'Y', 'Z']
> []
>
> I wrote this:
>
> #!/usr/bin/python3
> from itertools import groupby
>
> def get_lines_from_file(file_name):
>     with open(file_name) as reader:
>         for line in reader.readlines():
>             yield(line.strip())
>
> counter = 0
>
> def key_func(x):
>     if x.startswith("Starting a new group"):
>         global counter
>         counter += 1
>     return counter
>
> for key, group in groupby(get_lines_from_file("my_data"), key_func):
>     print(list(group)[1:])
>
> I get the output I desire, but I'm wondering if there is a solution without the global counter.

def separate_on(lines, separator):
    group = None
    for line in lines:
        if line.strip() == separator:
            if group is not None:
                yield group
            group = []
        else:
            assert group is not None  # Should have gotten a separator first
            group.append(line)
    yield group

with open("my_data") as my_data:
    for group in separate_on(my_data, "Starting a new group"):
        print(group)

The handling of the first separator line feels delicate to me, but this provides the output you want. --Ned. -- http://mail.python.org/mailman/listinfo/python-list
Re: Is Unicode support so hard...
On Sat, Apr 20, 2013 at 10:22 AM, Ned Batchelder n...@nedbatchelder.com wrote: On 4/20/2013 1:12 PM, jmfauth wrote: In a previous post, http://groups.google.com/group/comp.lang.python/browse_thread/thread/6aec70817705c226# , Chris “Kwpolska” Warrick wrote: “Is Unicode support so hard, especially in the 21st century?” -- Unicode is not really complicate and it works very well (more than two decades of development if you take into account iso-14). But, - I can say, as usual - people prefer to spend their time to make a better Unicode than Unicode and it usually fails. Python does not escape to this rule. - I'm busy with TeX (unicode engine variant), fonts and typography. This gives me plenty of ideas to test the flexible string representation (FSR). I should recognize this FSR is failing particulary very well... I can almost say, a delight. jmf Unicode lover I'm totally confused about what you are saying. What does make a better Unicode than Unicode mean? Are you saying that Python is guilty of this? In what way? Can you provide specifics? Or are you saying that you like how Python has implemented it? FSR is failing ... a delight? I don't know what you mean. --Ned. Don't bother trying to figure this out. jmfauth has been hijacking every thread that mentions Unicode to complain about the flexible string representation introduced in Python 3.3. Apparently, having proper Unicode semantics (indexing is based on characters, not code points) at the expense of performance when calling .replace on the only non-ASCII or BMP character in the string is a horrible bug. -- http://mail.python.org/mailman/listinfo/python-list
Re: Is Unicode support so hard...
On Sun, Apr 21, 2013 at 3:22 AM, Ned Batchelder n...@nedbatchelder.com wrote:
> I'm totally confused about what you are saying. What does "make a better Unicode than Unicode" mean? Are you saying that Python is guilty of this? In what way? Can you provide specifics? Or are you saying that you like how Python has implemented it? "FSR is failing ... a delight"? I don't know what you mean.

You're not familiar with jmf? He's one of our resident trolls. Allow me to summarize Python 3's Unicode support...

From 3.0 up to and including 3.2.x, Python could be built as either "narrow" or "wide". A wide build consumes four bytes per character in every string, which is rather wasteful (given that very few strings actually NEED that); a narrow build gets some things wrong. (I'm using a 2.7 here as I don't have a narrow-build 3.x handy; the same considerations apply, though.)

Python 2.7.4 (default, Apr 6 2013, 19:54:46) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> len(u"asdf\U00012345qwer")
10
>>> u"asdf\U00012345qwer"[8]
u'e'

In a narrow build, strings are stored in UTF-16, so astral characters count as two. This means that a program will behave unexpectedly differently on different platforms (other languages, such as ECMAScript, actually *mandate* UTF-16; at least this means you can depend on this otherwise-bizarre behaviour regardless of what platform you're on), and I have to say this is counter-intuitive.

Enter Python 3.3 and PEP 393 strings. Now *EVERY* Python build is, conceptually, wide. (I'm not sure how PEP 393 applies to other Pythons - Jython, PyPy, etc - so assume that whenever I refer to Python, I'm restricting this to CPython.) The underlying representation might be more efficient, but to the script, it's exactly the same as a wide build. If a string has no characters that demand more width, it'll be stored nice and narrow.
(It's the same technique that Pike has been using for a while, so it's a proven system; in any case, we know that this is going to work, it's just a question of performance - it adds a fixed overhead.) Great! We save memory in Python programs. Wonderful! Right?

Enter jmf. No, it's not wonderful, because OBVIOUSLY Python is now America-centric, because now the full Unicode range is divided into "these ones get stored in 1 byte per char, these in 2, these in 4". Clearly that's making life way worse for everyone else. Also, compared to the narrow build that jmf was previously using, this uses heaps MORE space in the stupid micro-benchmarks that he keeps on trotting out, because he has just one astral character in a sea of ASCII. And that's totally what programs are doing all the time, too. Never mind that basic operations like length, slicing, etc are no longer buggy; no, Python has taken a terrible step backwards here.

Oh, and check this out (narrow build again):

>>> def munge(s):
...     """Move characters around in a string."""
...     l = len(s)//4
...     return s[:l] + s[l*2:l*3] + s[l:l*2] + s[l*3:]
...
>>> munge("asdfqwerzxcv1234")
'asdfzxcvqwer1234'

Looks fine.

>>> munge(u"asd\U00012345we\U00034567xc\U00023456bla")
u'asd\U00012167xc\U00023745we\U00034456bla'

Where'd those characters come from? I was just moving stuff around, right? I can't get new characters out of it... can I?

Flash forward to current date, and jmf has hijacked so many threads to moan about PEP 393 that I'm actually happy about this one, simply because he gave it a new subject line and one appropriate to a discussion about Unicode. ChrisA -- http://mail.python.org/mailman/listinfo/python-list
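For contrast, the same probes on any CPython 3.3 or later (where PEP 393 applies) give the intuitive answers:

```python
# On a PEP 393 build (CPython 3.3+), len() and indexing count
# characters, not UTF-16 code units, even for astral characters.
s = "asdf\U00012345qwer"
print(len(s))  # 9 -- the astral character counts once, not twice
print(s[4] == "\U00012345")  # True -- indexing lands on the whole character
print(s[8])  # 'r', not the 'e' a narrow build reports
```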
Re: Is Unicode support so hard...
On Sat, Apr 20, 2013 at 8:02 PM, Benjamin Kaplan benjamin.kap...@case.edu wrote: On Sat, Apr 20, 2013 at 10:22 AM, Ned Batchelder n...@nedbatchelder.com wrote: On 4/20/2013 1:12 PM, jmfauth wrote: In a previous post, http://groups.google.com/group/comp.lang.python/browse_thread/thread/6aec70817705c226# , Chris “Kwpolska” Warrick wrote: “Is Unicode support so hard, especially in the 21st century?” -- Unicode is not really complicate and it works very well (more than two decades of development if you take into account iso-14). But, - I can say, as usual - people prefer to spend their time to make a better Unicode than Unicode and it usually fails. Python does not escape to this rule. - I'm busy with TeX (unicode engine variant), fonts and typography. This gives me plenty of ideas to test the flexible string representation (FSR). I should recognize this FSR is failing particulary very well... I can almost say, a delight. jmf Unicode lover I'm totally confused about what you are saying. What does make a better Unicode than Unicode mean? Are you saying that Python is guilty of this? In what way? Can you provide specifics? Or are you saying that you like how Python has implemented it? FSR is failing ... a delight? I don't know what you mean. --Ned. Don't bother trying to figure this out. jmfauth has been hijacking every thread that mentions Unicode to complain about the flexible string representation introduced in Python 3.3. Apparently, having proper Unicode semantics (indexing is based on characters, not code points) at the expense of performance when calling .replace on the only non-ASCII or BMP character in the string is a horrible bug. -- http://mail.python.org/mailman/listinfo/python-list Don’t forget the original context: this was a short remark to a guy I was responding to. His newsgroups software (slrn according to the headers) mangled the encoding of U+201C and U+201D in my From field, turning them into three question marks each. 
And jmf started a rant, as usual… PS. There are two fancy Unicode characters around. Can you find both of them, jmf? -- Kwpolska http://kwpolska.tk | GPG KEY: 5EAAEA16 stop html mail| always bottom-post http://asciiribbon.org| http://caliburn.nl/topposting.html -- http://mail.python.org/mailman/listinfo/python-list
Re: Is Unicode support so hard...
On 20/04/2013 19:02, Benjamin Kaplan wrote: On Sat, Apr 20, 2013 at 10:22 AM, Ned Batchelder n...@nedbatchelder.com wrote: On 4/20/2013 1:12 PM, jmfauth wrote: In a previous post, http://groups.google.com/group/comp.lang.python/browse_thread/thread/6aec70817705c226# , Chris “Kwpolska” Warrick wrote: “Is Unicode support so hard, especially in the 21st century?” -- Unicode is not really complicate and it works very well (more than two decades of development if you take into account iso-14). But, - I can say, as usual - people prefer to spend their time to make a better Unicode than Unicode and it usually fails. Python does not escape to this rule. - I'm busy with TeX (unicode engine variant), fonts and typography. This gives me plenty of ideas to test the flexible string representation (FSR). I should recognize this FSR is failing particulary very well... I can almost say, a delight. jmf Unicode lover I'm totally confused about what you are saying. What does make a better Unicode than Unicode mean? Are you saying that Python is guilty of this? In what way? Can you provide specifics? Or are you saying that you like how Python has implemented it? FSR is failing ... a delight? I don't know what you mean. --Ned. Don't bother trying to figure this out. jmfauth has been hijacking every thread that mentions Unicode to complain about the flexible string representation introduced in Python 3.3. Apparently, having proper Unicode semantics (indexing is based on characters, not code points) at the expense of performance when calling .replace on the only non-ASCII or BMP character in the string is a horrible bug. He can't complain about performance for the .replace issue any more as it's been fixed http://bugs.python.org/issue16061 Sadly he'll almost certainly have more edge cases up his sleeve while continuing to ignore minor issues like memory saving and correctness. -- If you're using GoogleCrap™ please read this http://wiki.python.org/moin/GoogleGroupsPython. 
Mark Lawrence -- http://mail.python.org/mailman/listinfo/python-list
Include and lib files for Visual Studio?
I am looking for the Python include and lib files for Windows. I have a C++ project that I am importing into Visual Studio 2010 (Express), and it links Python. I need the include and lib files for Windows. Where can I get them? I'd like to use Python 3.3.1 if possible. I found the MSI on python.org, but it says they don't include source. I am assuming there is a dev SDK or something similar, but I can't seem to find it. -- http://mail.python.org/mailman/listinfo/python-list
Re: itertools.groupby
Jason Friedman <jsf80238 at gmail.com> writes:
> I have a file such as:
>
> $ cat my_data
> Starting a new group
> a
> b
> c
> Starting a new group
> 1
> 2
> 3
> 4
> Starting a new group
> X
> Y
> Z
> Starting a new group
>
> I am wanting a list of lists:
>
> ['a', 'b', 'c']
> ['1', '2', '3', '4']
> ['X', 'Y', 'Z']
> []
>
> I wrote this:
>
> #!/usr/bin/python3
> from itertools import groupby
>
> def get_lines_from_file(file_name):
>     with open(file_name) as reader:
>         for line in reader.readlines():
>             yield(line.strip())
>
> counter = 0
>
> def key_func(x):
>     if x.startswith("Starting a new group"):
>         global counter
>         counter += 1
>     return counter
>
> for key, group in groupby(get_lines_from_file("my_data"), key_func):
>     print(list(group)[1:])
>
> I get the output I desire, but I'm wondering if there is a solution without the global counter.

Here's a solution that makes use of groupby (which is a good idea, I think) but avoids the counter (actually this is trivial; you just return the result of startswith directly). It also provides you with the rest of the separator line (you're using startswith in your code, so I figured you expect more on these lines). I replaced the startswith() with slicing, though, as this is usually faster.

def separate_on(iterable, separator):
    sep_len = len(separator)
    grouped_iter = (x[1] for x in groupby(
        iterable, lambda line: line[:sep_len] == separator))
    for separator_line in grouped_iter:
        rest_of_separator_line = next(separator_line)[sep_len:].strip()
        yield (rest_of_separator_line,
               [s.strip() for s in next(grouped_iter)])

then

for sep_tail, group in separate_on(your_input, your_separator):
    do_what_ever()

Hope it's what you want, Wolfgang -- http://mail.python.org/mailman/listinfo/python-list
Re: Include and lib files for Visual Studio?
On 2013.04.20 15:59, xuc...@gmail.com wrote: I am looking for the Python include and lib files for windows. I have a c++ project that I am importing into Visual Studio 2010 (express) and it links python. I need the include and lib files for windows. Where can I get them? I'd like to use python 3.3.1 if possible. Are they not in the source tarballs or VS debug info files archives on the 3.3.1 download page? -- CPython 3.3.1 | Windows NT 6.2.9200 / FreeBSD 9.1 -- http://mail.python.org/mailman/listinfo/python-list
Re: Include and lib files for Visual Studio?
On Sat, Apr 20, 2013 at 4:59 PM, xuc...@gmail.com wrote: I am looking for the Python include and lib files for windows. I have a c++ project that I am importing into Visual Studio 2010 (express) and it links python. I need the include and lib files for windows. Where can I get them? I'd like to use python 3.3.1 if possible. I found the msi on python.org but is says they don't include source. I am assuming there is a dev sdk or something similar but can't seem to find it. You are misinterpreting what you are reading. Install the msi. Look in C:\Python33\include and C:\Python33\libs -- http://mail.python.org/mailman/listinfo/python-list
Re: Cross-compiling Python for ARM?
On Tue, 16 Apr 2013 17:22:55 +0300, Anssi Saari a...@sci.fi wrote:
> In any case, cross compiling Python shouldn't be that hard. I just recently built 2.7.3 for my OpenWRT router since the packaged Python didn't have readline support (some long standing linking issue with readline and ncurses and uClibc).

Thanks guys. It turns out the error was not due to Python, so the ARM version available in the Debian repository is fine. -- http://mail.python.org/mailman/listinfo/python-list
Re: Is Unicode support so hard...
Hi jmf,

> This gives me plenty of ideas to test the flexible string representation (FSR). I should recognize this FSR is failing particulary very well...

This is too vague for me. Which string representation should Python use?

1) UTF-32
2) UTF-8
3) Python 3.3 -- 1, 2, or 4 bytes per character, decided at runtime
4) Python 3.2 -- 2 or 4 bytes per character, decided at Python build time
5) Something else

Neil -- http://mail.python.org/mailman/listinfo/python-list
There must be a better way
Below is part of a script which shows the changes made to permit the script to run on either Python 2.7 or Python 3.2. I was surprised to see that the CSV next method is no longer available. Suggestions welcome. Colin W.

def main():
    global inData, inFile
    if ver == '2':
        headerLine = inData.next()
    else:  # Python version 3.3
        inFile.close()
        inFile = open('Don Wall April 18 2013.csv', 'r', newline='')
        inData = csv.reader(inFile)
        headerLine = inData.__next__()

-- http://mail.python.org/mailman/listinfo/python-list
Re: There must be a better way
On Sat, Apr 20, 2013 at 4:46 PM, Colin J. Williams c...@ncf.ca wrote: Below is part of a script which shows the changes made to permit the script to run on either Python 2.7 or Python 3.2. I was surprised to see that the CSV next method is no longer available. Suggestions welcome. snip if ver == '2': headerLine= inData.next() else: # Python version 3.3 snip headerLine= inData.__next__() Use the built-in next() function (http://docs.python.org/2/library/functions.html#next ) instead: headerLine = next(iter(inData)) Cheers, Chris -- http://mail.python.org/mailman/listinfo/python-list
Re: There must be a better way
On Sat, 20 Apr 2013 19:46:07 -0400, Colin J. Williams wrote:
> Below is part of a script which shows the changes made to permit the script to run on either Python 2.7 or Python 3.2. I was surprised to see that the CSV next method is no longer available.

This makes no sense. What's "the CSV next method"? Are you talking about the csv module? It has no next method.

py> import csv
py> csv.next
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'next'

Please *define your terms*, otherwise we are flailing in the dark trying to guess what your code is supposed to do. The code you provide cannot possibly work -- you use variables before they are defined, use other variables that are never defined at all, reference mysterious globals. You even close a file before it is opened! Please read this: http://sscce.org/ and provide a *short, self-contained, correct example* that we can actually run.

But in the meantime, I'm going to consult the entrails and try to guess what you are doing: you're complaining that iterators have a next method in Python 2, and __next__ in Python 3. Am I correct?

If so, this is true, but you should not be using the plain next method in Python 2. You should be using the built-in function next(), not calling the method directly. The plain next *method* was a mistake, only left in for compatibility with older versions of Python. Starting from Python 2.6, the correct way to get the next value from an arbitrary iterator is with the built-in function next(), not by calling a method directly. (In the same way that you get the length of a sequence by calling the built-in function len(), not by calling the __len__ method directly.)

So provided you are using Python 2.6 or better, you call:

    next(inData)

to get the next value, regardless of whether it is Python 2.x or 3.x. If you need to support older versions, you can do this:

    try:
        next  # Does the built-in already exist?
    except NameError:
        # No, we define our own.
        def next(iterator):
            return iterator.next()

then just use next(inData) as normal. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
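For instance, applied to a csv.reader in Python 3 (an in-memory file stands in for the real one here):

```python
import csv
import io

# The built-in next() pulls the header row off a csv.reader; the
# subsequent list() (or a for loop) then sees only the data rows.
data = io.StringIO("name,age\r\nalice,30\r\nbob,25\r\n")
reader = csv.reader(data)
header = next(reader)
rows = list(reader)
print(header)  # ['name', 'age']
print(rows)    # [['alice', '30'], ['bob', '25']]
```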
Re: itertools.groupby
On Sat, 20 Apr 2013 11:09:42 -0600, Jason Friedman wrote:
> I have a file such as:
>
> $ cat my_data
> Starting a new group
> a
> b
> c
> Starting a new group
> 1
> 2
> 3
> 4
> Starting a new group
> X
> Y
> Z
> Starting a new group
>
> I am wanting a list of lists:
>
> ['a', 'b', 'c']
> ['1', '2', '3', '4']
> ['X', 'Y', 'Z']
> []
>
> I wrote this: [...]
>
> I get the output I desire, but I'm wondering if there is a solution without the global counter.

I wouldn't use groupby. It's a hammer, and not every grouping job is a nail. Instead, use a simple accumulator:

def group(lines):
    accum = []
    for line in lines:
        line = line.strip()
        if line == 'Starting a new group':
            if accum:  # Don't bother if there are no accumulated lines.
                yield accum
            accum = []
        else:
            accum.append(line)
    # Don't forget the last group of lines.
    if accum:
        yield accum

-- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: There must be a better way
On 2013-04-21 00:06, Steven D'Aprano wrote:
> On Sat, 20 Apr 2013 19:46:07 -0400, Colin J. Williams wrote:
>> Below is part of a script which shows the changes made to permit the
>> script to run on either Python 2.7 or Python 3.2. I was surprised to
>> see that the CSV next method is no longer available.
>
> This makes no sense. What's the CSV next method? Are you talking
> about the csv module? It has no next method.

In 2.x, the csv.reader() class (and csv.DictReader() class) offered a .next() method that is absent in 3.x.

For those who use(d) the csv.reader object on a regular basis, this was a pretty common usage, particularly if you had to do your own header parsing:

    f = open(...)
    r = csv.reader(f)
    try:
        headers = r.next()
        header_map = analyze(headers)
        for row in r:
            foo = row[header_map["FOO COLUMN"]]
            process(foo)
    finally:
        f.close()

(I did this for a number of cases where the client couldn't send column headers with consistent capitalization/spacing, so my header-making function had to normalize the case/spaces and then reference the normalized names.)

> So provided you are using Python 2.6 or better, you call:
>
>     next(inData)
>
> to get the next value, regardless of whether it is Python 2.x or 3.x.
> If you need to support older versions, you can do this:
>
>     try:
>         next  # Does the built-in already exist?
>     except NameError:
>         # No, we define our own.
>         def next(iterator):
>             return iterator.next()
>
> then just use next(inData) as normal.

This is a good expansion of Chris Rebert's suggestion to use next(), as those of us that have to support pre-2.6 code lack the next() function out of the box.

-tkc

--
http://mail.python.org/mailman/listinfo/python-list
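[The header-normalization step Tim describes can be sketched like this. The names `normalize` and `header_map_from` are invented for the example, not his actual code; the point is that using the next() built-in instead of the .next() method makes the same pattern work on Python 2.6+ and 3.x alike:]

```python
def normalize(name):
    # Collapse case and internal/surrounding whitespace, so that
    # "Foo Column", " FOO  COLUMN " and "foo column" all map to one key.
    return ' '.join(name.lower().split())

def header_map_from(reader):
    # next() works on any iterator, Python 2.6+ and 3.x alike.
    headers = next(reader)
    return {normalize(h): i for i, h in enumerate(headers)}

# Stand-in for a csv.reader over a file with a header row:
rows = iter([[' FOO  Column ', 'Bar'], ['1', '2']])
hmap = header_map_from(rows)
```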
Re: Include and lib files for Visual Studio?
On 4/20/2013 4:59 PM, xuc...@gmail.com wrote: I am looking for the Python include and lib files for windows. I have a c++ project that I am importing into Visual Studio 2010 (express) and it links python. I need the include and lib files for windows. Where can I get them? I'd like to use python 3.3.1 if possible. I found the msi on python.org but is says they don't include source. I am assuming there is a dev sdk or something similar but can't seem to find it. If you want *everything*, clone hg.python.org/cpython (or something like that). PCBuild has Windows stuff, including VS .sln files. -- http://mail.python.org/mailman/listinfo/python-list
Re: There must be a better way
On 4/20/2013 8:34 PM, Tim Chase wrote:
> In 2.x, the csv.reader() class (and csv.DictReader() class) offered
> a .next() method that is absent in 3.x

In Py 3, .next was renamed to .__next__ for *all* iterators. The intention is that one iterates with `for item in iterable` or uses the builtin functions iter() and next().
--
http://mail.python.org/mailman/listinfo/python-list
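[A minimal Python 3 sketch of the recommended spelling, using the csv module the thread is about (the file contents are invented for the example):]

```python
import csv
import io

# A two-line CSV file, held in memory for the example.
data = io.StringIO("name,age\nalice,30\n")
reader = csv.reader(data)

# In Python 3 the method is reader.__next__; the portable spelling
# is the next() built-in, which also works on Python 2.6+.
headers = next(reader)
first_row = next(reader)
```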
Re: Is Unicode support so hard...
On 04/20/2013 11:14 AM, Chris Angelico wrote: Flash forward to current date, and jmf has hijacked so many threads to moan about PEP 393 that I'm actually happy about this one, simply because he gave it a new subject line and one appropriate to a discussion about Unicode. +1000 -- http://mail.python.org/mailman/listinfo/python-list
Re: Is Unicode support so hard...
On Apr 21, 4:03 am, Neil Hodgson nhodg...@iinet.net.au wrote:
> Hi jmf,
>
>> This gives me plenty of ideas to test the flexible string
>> representation (FSR). I should recognize this FSR is failing
>> particulary very well...
>
> This is too vague for me. Which string representation should Python use?
> 1) UTF-32
> 2) UTF-8
> 3) Python 3.3 -- 1, 2, or 4 bytes per character decided at runtime
> 4) Python 3.2 -- 2 or 4 bytes per character decided at Python build time
> 5) Something else

jmf recommends UTF-8. Apart from the fact that UTF-8 would be less (time) performant in all cases, and extremely so in cases like indexing, the fact that jmf says so makes it more ridiculous.

According to jmf, python sucks up to ASCII (those big bad Americans… of whom Steven is the first…) whereas unicode is the true international/universal standard. I guess the irony is clear to all (except jmf) given that:

- it's unicode that sucks up to ASCII, by carefully conforming in the first 128 positions, including the completely useless control chars; python just implements the standard
- UTF-8 is an ASCII-biased unicode compression method, viz. UTF-8 is most space-efficient on ASCII at the cost of being generally time-inefficient
- all jmf's beefs (as far as I remember) are variations on the theme: time-inefficiency is equivalent to non-unicode-compliance

In short, he manifests a dog-in-the-manger mindset: since the whole world will never speak French (grief, mope, grumble, thrash…) everyone should pay for the Chinese character set's size even if they are monolingually English.

All that said… I believe that the recent correction in unicode performance followed jmf's grumbles (Mark, please correct me if I am wrong). So the python community can be thankful to jmf, even if he insists on laboring under bizarre political hallucinations.

[Written from India, where a monolingual person is as rare as a palm tree on a polar cap]

--
http://mail.python.org/mailman/listinfo/python-list
clear the screen
How to clear the screen? For example, in a two-player game: one player sets a number and the second player guesses it. When the first player enters the number, it should be cleared so that the second player is not able to see it. My question is how to clear the number. Thank you!
--
http://mail.python.org/mailman/listinfo/python-list
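[There is no single Python statement for this; a common approach -- assuming the program runs in a normal terminal/console window, which the question doesn't say -- is to invoke the platform's own clear command:]

```python
import os

def clear_screen():
    # 'cls' is the Windows console command; 'clear' works on
    # POSIX terminals. os.name distinguishes the two platforms.
    os.system('cls' if os.name == 'nt' else 'clear')
```

[A cheap portable fallback, if shelling out is unwanted, is simply to print enough blank lines to scroll the number out of view, e.g. print('\n' * 100). Neither method protects against scrolling back in the terminal's history.]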
Re: Is Unicode support so hard...
On Sat, 20 Apr 2013 18:37:00 -0700, rusi wrote: According to jmf python sucks up to ASCII (those big bad Americans… of whom Steven is the first…) Watch who you're calling an American, mate. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: Is Unicode support so hard...
On Sun, Apr 21, 2013 at 1:36 PM, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: On Sat, 20 Apr 2013 18:37:00 -0700, rusi wrote: According to jmf python sucks up to ASCII (those big bad Americans… of whom Steven is the first…) Watch who you're calling an American, mate. I think he knows, and that's why he said it. You and I are foremost among Americans who are destroying Python. ChrisA -- http://mail.python.org/mailman/listinfo/python-list
[issue3693] Obscure array.array error message
Alexandre Vassalotti added the comment: Here's a patch to fix the exception. -- keywords: +patch Added file: http://bugs.python.org/file29949/fix_array_err_msg.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue3693 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17468] Generator memory leak
Nick Coghlan added the comment: I'll create a separate issue for the tp_del -> __del__ question, since that's a language design decision that *does* need Guido's input, but doesn't relate to the broader question of generators, cycles and potential memory leaks. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17800] Expose __del__ when tp_del is populated from C code
New submission from Nick Coghlan: This came up in issue 17468: currently, populating tp_del from C (as generators now do) doesn't automatically create a __del__ wrapper visible from Python. The rationale given in the initial commit is that there's no need to define a wrapper, since tp_del won't be populated from C code (that will use tp_dealloc instead), but there's now at least one case where it *is* populated from C (generators), which means it behaves *as if* __del__ is defined (since the interpreter actually checks the tp_del slot), but *looks* like __del__ *isn't* defined (since there is no wrapper created). Independent of the memory leak concerns with generators defining tp_del, it would be better if a wrapper function was defined so the existence of the method was at least visible from Python code. -- components: Interpreter Core messages: 187409 nosy: ncoghlan priority: low severity: normal stage: needs patch status: open title: Expose __del__ when tp_del is populated from C code type: enhancement versions: Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17800 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17468] Generator memory leak
Nick Coghlan added the comment: Issue 17800 if anyone wants to weigh in on the tp_del -> __del__ question (I ended up not adding Guido back to that one either, since the original design was based on an assumption that's now demonstrably false, so it makes sense to update the behaviour) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17801] Tools/scripts/gprof2html.py: `#! /usr/bin/env python32.3`
New submission from C Anthony Risinger: http://hg.python.org/cpython/file/d499189e7758/Tools/scripts/gprof2html.py#l1 ...should be self explanatory. i didn't run into this myself, but i saw that the Archlinux package was fixing it via `sed`, without the customary link to upstream... so here it is ;) -- components: Demos and Tools messages: 187411 nosy: C.Anthony.Risinger priority: normal severity: normal status: open title: Tools/scripts/gprof2html.py: `#! /usr/bin/env python32.3` versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17801 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17800] Expose __del__ when tp_del is populated from C code
Richard Oudkerk added the comment: Would this mean that the destructor could be run more than once (or prematurely)? -- nosy: +sbt ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17800 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17468] Generator memory leak
Anssi Kääriäinen added the comment: I wonder if it would be better to reword the garbage collection docs to mention that Python can't collect objects if they are part of a reference cycle, and some of the objects in the reference cycle need to run code at gc time. Then mention that such objects include objects with __del__ and also generators if they do have some other blocks than loops in them (for example try or with blocks). To me it seems this issue could be mitigated somewhat by collecting the objects if there is just one object with finalizer code in the cycle. It should be safe to run the single finalizer, then collect the whole cycle, right? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1635741] Interpreter seems to leak references after finalization
Changes by Martin Morrison m...@ensoft.co.uk: -- nosy: +isoschiz, pconnell ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1635741 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17408] second python execution fails when embedding
Changes by Martin Morrison m...@ensoft.co.uk: -- nosy: +isoschiz, pconnell ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17408 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17802] html.HTMLParser raises UnboundLocalError:
New submission from Baptiste Mispelon:

When trying to parse the string `ab`, the parser raises an UnboundLocalError:

{{{
from html.parser import HTMLParser
p = HTMLParser()
p.feed('ab')
p.close()

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.3/html/parser.py", line 149, in close
    self.goahead(1)
  File "/usr/lib/python3.3/html/parser.py", line 252, in goahead
    if k >= i:
UnboundLocalError: local variable 'k' referenced before assignment
}}}

Granted, the HTML is invalid, but this error looks like it might have been an oversight.

-- components: Library (Lib) messages: 187414 nosy: bmispelon priority: normal severity: normal status: open title: html.HTMLParser raises UnboundLocalError: type: crash versions: Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17802 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue9634] Add timeout parameter to Queue.join()
Changes by Martin Morrison m...@ensoft.co.uk: -- nosy: +isoschiz ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9634 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17468] Generator memory leak
Antoine Pitrou added the comment:

> I wonder if it would be better to reword the garbage collection docs
> to mention that Python can't collect objects if they are part of a
> reference cycle, and some of the objects in the reference cycle need
> to run code at gc time. Then mention that such objects include
> objects with __del__ and also generators if they do have some other
> blocks than loops in them (for example try or with blocks).
> To me it seems this issue could be mitigated somewhat by collecting
> the objects if there is just one object with finalizer code in the
> cycle. It should be safe to run the single finalizer, then collect
> the whole cycle, right?

This is tricky because it relies on the fact that the object with the finalizer will be collected before any objects the finalizer relies on. For a generator, this means it has to be finalized before its frame object is cleared (otherwise, the finally block can't execute correctly).

However, the GC doesn't collect those objects directly. It calls tp_clear on each of them, hoping that tp_clear will break the reference cycle (and at the same time collect some of the objects in the cycle). But it's difficult to influence which objects are collected first.

In gcgen.py's case, the reference cycle is comprised of the MyObj instance, the generator, and the generator's frame:

    `self` (MyObj) -> generator -> frame -> `self` (MyObj)

So if `self` is collected first, it will collect the generator before the frame, and the generator's finally block can execute fine. But if the frame is collected first, it will clear itself and it will be too late for the generator's finally block to execute. And if the generator is collected first... well, it can't, as it doesn't have a tp_clear slot; but if it had, that would clear the frame and make the finally block uncallable.

One might argue that a generator's tp_clear should call the finally block *before* clearing the frame.
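[A minimal sketch of the cycle described above; the names mirror the discussion, not the actual gcgen.py attached to the issue. On CPython 3.4 and later, where PEP 442 subsequently reworked finalization of cyclic garbage, the collector does run the generator's finally block when the cycle is reclaimed:]

```python
import gc

ran = []

class MyObj:
    def __init__(self):
        # self -> generator -> frame -> self: a reference cycle,
        # because the generator frame holds `self` as a local.
        self.gen = self._gen()
        next(self.gen)

    def _gen(self):
        try:
            yield
        finally:
            ran.append(True)

obj = MyObj()
del obj       # refcounting alone cannot reclaim the cycle
gc.collect()  # the collector must decide whether to run the finally
```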
-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17545] os.listdir and os.path.join inconsistent on empty path
Changes by R. David Murray rdmur...@bitdance.com: -- keywords: +easy stage: - needs patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17545 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17802] html.HTMLParser raises UnboundLocalError:
R. David Murray added the comment: Thanks for the report. Yes, that's in a complicated bit of error recovery code, and clearly you found a path through it that doesn't have a corresponding test :) -- keywords: +easy nosy: +ezio.melotti, r.david.murray stage: - needs patch type: crash - behavior versions: +Python 3.4 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17802 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17800] Expose __del__ when tp_del is populated from C code
Antoine Pitrou added the comment:

Sounds reasonable to me. Note that it won't remove the special-casing in gcmodule.c:

    static int
    has_finalizer(PyObject *op)
    {
        if (PyGen_CheckExact(op))
            return PyGen_NeedsFinalizing((PyGenObject *)op);
        else
            return op->ob_type->tp_del != NULL;
    }

-- nosy: +pitrou ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17800 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17802] html.HTMLParser raises UnboundLocalError:
Changes by Ezio Melotti ezio.melo...@gmail.com: -- assignee: - ezio.melotti ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17802 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17803] Calling Tkinter.Tk() with a baseName keyword argument throws UnboundLocalError
New submission from Yasuhiro Fujii:

Calling Tkinter.Tk() with a baseName keyword argument throws UnboundLocalError on Python 2.7.4. A process to reproduce the bug:

    import Tkinter
    Tkinter.Tk(baseName="test")

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1748, in __init__
        if not sys.flags.ignore_environment:
    UnboundLocalError: local variable 'sys' referenced before assignment

A patch to fix the bug (the local `import sys` makes `sys` a local name for the whole method, so any code path that skips that branch later finds `sys` unbound; relying on the module-level import instead fixes it):

    --- Lib/lib-tk/Tkinter.py.orig
    +++ Lib/lib-tk/Tkinter.py
    @@ -1736,7 +1736,7 @@
             # ensure that self.tk is always _something_.
             self.tk = None
             if baseName is None:
    -            import sys, os
    +            import os
                 baseName = os.path.basename(sys.argv[0])
                 baseName, ext = os.path.splitext(baseName)
                 if ext not in ('.py', '.pyc', '.pyo'):

-- components: Tkinter messages: 187418 nosy: y-fujii priority: normal severity: normal status: open title: Calling Tkinter.Tk() with a baseName keyword argument throws UnboundLocalError type: behavior versions: Python 2.7 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17803 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17468] Generator memory leak
Anssi Kääriäinen added the comment:

I was imagining that the collection should happen in two passes. First check for tp_dels and call them if safe (a single tp_del in the cycle allowed). Then free the memory. The first pass is already there, it just doesn't collect callable tp_dels. If it were possible to turn an object into an inaccessible state after its tp_del was called, it would be possible to call multiple tp_dels in a cycle. If a __del__ tries to use an already deleted object, you will get some sort of runtime exception indicating access to an already collected object (another option is to allow access to an object which has already had __del__ called, but that seems problematic).

Now, both of the above are likely way more complicated to implement than I imagine. I have very little knowledge of the technical details involved. If I understand correctly, your idea was to do something similar to the above, but only for generators.

The current situation is somewhat ugly. First, I can imagine people wishing to do try-finally or something like `with self.lock:` in a generator. These will leak in circular reference cases. The generator case is surprising, so surprising that even experienced programmers will make such mistakes (see the django ticket for one example). Second, from the application programmer's perspective, the fact that __del__ wasn't called is usually similar to having a silent failure in their code - for example this can result in leaking database connections or other resources, not to mention possible problems if one has a `with self.lock:` block in a generator.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14531] Backtrace should not attempt to open stdin file
Changes by Martin Morrison m...@ensoft.co.uk: -- nosy: +isoschiz ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14531 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17272] request.full_url: unexpected results on assignment
R. David Murray added the comment: Thanks for working on this, Demian. I made some review comments, mostly style things about the tests. There's one substantial comment about the change in behavior of the full_url property, though (before the patch it does not include the fragment, after the patch it does). We need to think about the implications of that change in terms of backward compatibility. It makes more sense, but how likely is it to break working code? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17272 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17795] backwards-incompatible change in SysLogHandler with unix domain sockets
Changes by Vinay Sajip vinay_sa...@yahoo.co.uk: -- hgrepos: +183 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17795 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17795] backwards-incompatible change in SysLogHandler with unix domain sockets
Changes by Vinay Sajip vinay_sa...@yahoo.co.uk: Added file: http://bugs.python.org/file29950/6e46f4e08717.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17795 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17795] backwards-incompatible change in SysLogHandler with unix domain sockets
Vinay Sajip added the comment: I've attached an alternative patch. The default socktype stays as socket.SOCK_DGRAM, but you can specify socktype=None to get the SOCK_DGRAM falling back to SOCK_STREAM behaviour. Can you confirm that this alternative approach works in your environment? (This patch is against the default branch.) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17795 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17803] Calling Tkinter.Tk() with a baseName keyword argument throws UnboundLocalError
R. David Murray added the comment: Thanks for the report and patch. It would be nice to turn that test into a unit test. I've run the test on 3.4; this appears to be a 2.7 only bug. -- nosy: +r.david.murray stage: - test needed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17803 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17468] Generator memory leak
Benjamin Peterson added the comment: In a sense, doing something like with self.lock in a generator is already a leak. Even if there wasn't a cycle, collection could be arbitrarily delayed (in partincular on non-CPython VMs). I wonder if making generators context managers which call close() on exit would help. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17468] Generator memory leak
Antoine Pitrou added the comment: Those are two different issues: - not calling the finalizer in a timely manner - never calling the finalizer and creating a memory leak through gc.garbage -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14621] Hash function is not randomized properly
Changes by Martin Morrison m...@ensoft.co.uk: -- nosy: +isoschiz, pconnell ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue14621 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17468] Generator memory leak
Benjamin Peterson added the comment: I realize, but if people were responsible and closed their generators, the second one wouldn't be as much of a problem. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue6359] pyexpat.c calls trace function incorrectly for exceptions
Ned Batchelder added the comment: Attached a patch which simply removes the code that invokes the trace function. -- keywords: +patch Added file: http://bugs.python.org/file29951/6539.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6359 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17192] libffi-3.0.13 import
koobs added the comment: These break what was addressed in #11729 for default, 3.x and 3.3. 2.7 seems to have made it through unscathed. I'm not sure where or how the old code was introduced, but the clang fix has been upstreamed and is correct in the pure libffi 3.0.13 sources. Failure to build ctypes can be seen here:

http://buildbot.python.org/all/builders/AMD64%20FreeBSD%209.0%20dtrace%2Bclang%203.x/builds/1246/steps/test/logs/stdio
http://buildbot.python.org/all/builders/AMD64%20FreeBSD%209.0%20dtrace%2Bclang%203.x/builds/1245/steps/test/logs/stdio
http://buildbot.python.org/all/builders/AMD64%20FreeBSD%209.0%20dtrace%2Bclang%203.3/builds/538/steps/test/logs/stdio

-- nosy: +koobs ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17192 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17800] Expose __del__ when tp_del is populated from C code
Benjamin Peterson added the comment: What exactly would calling such a wrapper do? -- nosy: +benjamin.peterson ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17800 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17646] traceback.py has a lot of code duplication
Martin Morrison added the comment: On 20/04/2013 03:54, Benjamin Peterson wrote: It would be great to have a test for that. :) I was afraid you'd say that. ;-) I'll look at adding test cases to cover the functions not currently covered (seems most of the print functions aren't, and all of the 'stack' functions aren't). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17646 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17800] Expose __del__ when tp_del is populated from C code
Changes by Martin Morrison m...@ensoft.co.uk: -- nosy: +isoschiz ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17800 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17192] libffi-3.0.13 import
Changes by R. David Murray rdmur...@bitdance.com: -- status: closed - open ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17192 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17656] Python 2.7.4 breaks ZipFile extraction of zip files with unicode member paths
koobs added the comment: heads-up: Tests are still failing on the FreeBSD (gcc and clang) buildbots:

http://buildbot.python.org/all/builders/AMD64%20FreeBSD%209.0%20dtrace%202.7/builds/472/steps/test/logs/stdio
http://buildbot.python.org/all/builders/AMD64%20FreeBSD%209.0%20dtrace%2Bclang%202.7/builds/468/steps/test/logs/stdio

-- nosy: +koobs ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17656 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17800] Expose __del__ when tp_del is populated from C code
Changes by Barry A. Warsaw ba...@python.org: -- nosy: +barry ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17800 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17468] Generator memory leak
Nick Coghlan added the comment: We can't make ordinary generators innately context managers, as it makes the error too hard to detect when you accidentally leave out @contextmanager when using a generator to write a custom one. You can already use contextlib.closing to forcibly close them when appropriate, so providing a decorator to implicitly map __exit__ to close wouldn't really save much. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
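[The contextlib.closing pattern Nick mentions, sketched for a generator; the names are illustrative. On exit from the with block, closing() calls the generator's close(), which raises GeneratorExit inside it and so runs its finally clause deterministically, without waiting for the garbage collector:]

```python
from contextlib import closing

events = []

def numbers():
    try:
        yield 1
        yield 2
    finally:
        events.append('closed')

with closing(numbers()) as g:
    first = next(g)
# Leaving the block called g.close(), running the finally clause
# even though the generator was never exhausted.
```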
[issue968063] Add fileinput.islastline()
Mark Lawrence added the comment: The latest patch still applies cleanly, can we have it reviewed please. -- nosy: +BreamoreBoy versions: +Python 3.4 -Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue968063 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17792] Unhelpful UnboundLocalError due to del'ing of exception target
Barry A. Warsaw added the comment:

Ezio, the problem with your patch is that it also gives a warning on this code, which is totally safe:

    def good():
        exc = None
        try:
            bar(int(sys.argv[1]))
        except KeyError as e:
            print('ke')
            exc = e
        except ValueError as e:
            print('ve')
            exc = e
        print(exc)

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17792 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
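[For contrast, a sketch of the unsafe pattern this issue is about (the function and names here are invented for illustration, not taken from the patch): in Python 3, an `except ... as e` clause compiles an implicit `del e` at the end of the block, so referencing the target afterwards raises an unhelpful UnboundLocalError:]

```python
def bad():
    try:
        {}['missing']
    except KeyError as e:
        pass
    # Python 3 has already del'ed `e` here, so this line raises
    # UnboundLocalError instead of returning the caught exception.
    return e
```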
[issue17468] Generator memory leak
Nick Coghlan added the comment: To get back to Anssi's original suggestion... I think Anssi's proposal to allow finalisation to be skipped for try/except/else is legitimate. It's only finally clauses that we try to guarantee will execute, there's no such promise implied for ordinary except clauses. We *don't care* if the generator *would* have caught the thrown GeneratorExit, we only care about ensuring that finally blocks are executed (including those implied by with statements). So if there aren't any finally clauses or with statements in the block stack, we should be able to just let the generator and frame get collected (as Anssi suggested), without trying to allow execution to complete. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17800] Expose __del__ when tp_del is populated from C code
Nick Coghlan added the comment:

Calling __del__ explicitly shouldn't be any worse than doing the same thing for any other type implemented in Python (or, in the case of generators, calling close() multiple times). What I'm mostly interested in is the "can this type cause uncollectable cycles" introspection aspect.

However, as Antoine noted, generators are an interesting special case because the GC is able to *skip* finalising them in some cases, so exposing __del__ isn't right for them either (as that suggests they will *always* be uncollectable in a cycle, when that isn't the case).

So now I'm wondering if a better answer may be to generalise the current generator special case to a __needsdel__ protocol: provide a __del__ method, but always make it possible for the GC to skip it when it wouldn't do anything (e.g. if you've already called close() explicitly). PyGenerator_NeedsFinalizing would then become the __needsdel__ impl for generators, and we could lose the special casing in the GC code.

From Python, you could detect the three cases through:

- __del__ only: can cause uncollectable cycles
- __del__ and __needsdel__: can cause uncollectable cycles, but it depends on the instance state
- neither: can't cause uncollectable cycles

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue17800 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17800] Expose __del__ when tp_del is populated from C code
Benjamin Peterson added the comment: I don't understand why we need to invent a protocol for this. The gc module already has methods and members for introspecting the collection. I don't think the gen special casing currently needs to be generalized. (What would use it?) --
[issue17468] Generator memory leak
Antoine Pitrou added the comment:

> We *don't care* if the generator *would* have caught the thrown
> GeneratorExit, we only care about ensuring that finally blocks are
> executed (including those implied by with statements). So if there
> aren't any finally clauses or with statements in the block stack, we
> should be able to just let the generator and frame get collected (as
> Anssi suggested), without trying to allow execution to complete.

That's a good point. I'm also contemplating that the generator close() could be done from the *frame*'s tp_clear, which would sidestep the issue of collection order entirely: the finally block doesn't actually need the generator to execute, it only needs the frame and its locals.

--
[issue17800] Add gc.needs_finalizing() to check if an object needs finalising
Nick Coghlan added the comment:

Yeah, I've figured out that rather than exposing __del__ if tp_del is populated, or generalising the generator special case, the simplest way to make this info accessible is to be able to ask the *garbage collector* if it thinks an object needs finalising.

That actually makes this a pretty easy issue (as C issues go) - it's just a matter of exposing http://hg.python.org/cpython/file/default/Modules/gcmodule.c#l525 (has_finalizer) as gc.needs_finalizing. It will be easier once #17468 is done though, since that will make it clearer what the documentation should say.

--
dependencies: +Generator memory leak
keywords: +easy
title: Expose __del__ when tp_del is populated from C code -> Add gc.needs_finalizing() to check if an object needs finalising
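A pure-Python approximation of the check being exposed may make the proposal concrete. The generator branch below is only a loose stand-in for PyGenerator_NeedsFinalizing, and the function name mirrors the proposed (not yet existing) gc.needs_finalizing():

```python
# Rough pure-Python model of gcmodule.c's has_finalizer(): generators are
# special-cased (a finished or never-started generator has no live frame
# and needs no finalisation); everything else is judged by __del__.

import types

def needs_finalizing(obj):
    if isinstance(obj, types.GeneratorType):
        # Loose stand-in for PyGenerator_NeedsFinalizing: no frame left
        # means nothing to finalise. (The real check also inspects the
        # frame's block stack.)
        return obj.gi_frame is not None
    return hasattr(type(obj), "__del__")

def gen():
    yield 1

g = gen()
assert needs_finalizing(g)        # live frame: might need finalising
g.close()
assert not needs_finalizing(g)    # closed: frame is gone, nothing to do
assert not needs_finalizing(object())
```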
[issue17547] checking whether gcc supports ParseTuple __format__... erroneously returns yes with gcc 4.8
Alex Leach added the comment: The configure.ac patch works for me, on x86_64 Arch Linux. I just updated to GCC-4.8.0 and came across an overwhelming number of these warnings when compiling extension modules. Thanks for the simple fix, David. Tested on hg branch 2.7; the testsuite completes without error.

--
nosy: +Alex.Leach
versions: -Python 2.6, Python 2.7, Python 3.3, Python 3.4, Python 3.5
[issue16694] Add pure Python operator module
Roundup Robot added the comment: New changeset 97834382c6cc by Antoine Pitrou in branch 'default': Issue #16694: Add a pure Python implementation of the operator module. http://hg.python.org/cpython/rev/97834382c6cc -- nosy: +python-dev
[issue16694] Add pure Python operator module
Antoine Pitrou added the comment: I've now committed the latest patch. Thank you very much, Zachary!

--
resolution: -> fixed
stage: commit review -> committed/rejected
status: open -> closed
[issue17409] resource.setrlimit doesn't respect -1
Roundup Robot added the comment:

New changeset 186f6bb3e46a by R David Murray in branch '3.3': #17409: Document RLIM_INFINITY and use it to clarify the setrlimit docs. http://hg.python.org/cpython/rev/186f6bb3e46a

New changeset 9c4db76d073e by R David Murray in branch '2.7': #17409: Document RLIM_INFINITY and use it to clarify the setrlimit docs. http://hg.python.org/cpython/rev/9c4db76d073e

New changeset f1d95b0ab66e by R David Murray in branch 'default': Merge #17409: Document RLIM_INFINITY and use it to clarify the setrlimit docs. http://hg.python.org/cpython/rev/f1d95b0ab66e

--
nosy: +python-dev
[issue17409] resource.setrlimit doesn't respect -1
R. David Murray added the comment: There being no objection :) I've committed the patch.

--
resolution: -> fixed
stage: commit review -> committed/rejected
status: open -> closed
[issue16942] urllib still doesn't support persistent connections
Senthil Kumaran added the comment: Agree with Demian Brecht. This issue is being closed since two other issues cover the requirements discussed here:

* issue #16901 - for enhancing FileCookieJar
* issue #9740 - for supporting persistent HTTP 1.1 connections

(:-( on me)

--
nosy: +orsenthil
resolution: -> duplicate
stage: needs patch -> committed/rejected
status: open -> closed
superseder: -> In http.cookiejar.FileCookieJar() the .load() and .revert() methods don't work
[issue17646] traceback.py has a lot of code duplication
Serhiy Storchaka added the comment: Could print_exception() in Lib/idlelib/run.py reuse new traceback functions? -- nosy: +serhiy.storchaka
[issue14805] Support display of both __cause__ and __context__
Serhiy Storchaka added the comment: And don't forget about print_exception() in Lib/idlelib/run.py. -- nosy: +serhiy.storchaka
[issue17409] resource.setrlimit doesn't respect -1
Paul Price added the comment: Thanks! --
[issue9682] socket.create_connection error message for domain subpart with invalid length is very confusing
R. David Murray added the comment:

The messages in both branches of the if talk about empty labels, which is probably my fault since I got the sense of the if wrong in my suggestion. One of them should be about the label being too long. The one that should be the 'empty' message also doesn't read right to my eyes. As I said, I think it should be something like: 'empty label in %r' % result.decode().

Also, in looking at the module code, there are several other places where the size check and simple message are used. In all of these cases the string has already been confirmed to be (or converted to, in the case of the punycoding) ASCII. So we can abstract this check into a function and call it from all those locations.

Do you want to update the patch accordingly, Mike? It will need more tests.

--
assignee: loewis ->
type: enhancement -> behavior
versions: +Python 3.3, Python 3.4 -Python 3.2
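The shared helper David asks for could look something like the sketch below. The function names and exact message wording are illustrative assumptions, not the eventual patch; the 63-byte limit is the DNS label maximum:

```python
# Hypothetical shared validator: one place that distinguishes the two
# failure modes (empty label vs. over-long label) with distinct messages,
# assuming the name has already been confirmed to be ASCII.

def check_label(label, name):
    """Validate one ASCII label of *name* (DNS limit: 63 bytes)."""
    if not label:
        raise UnicodeError("empty label in %r" % name)
    if len(label) > 63:
        raise UnicodeError("label too long in %r" % name)

def check_name(name):
    for label in name.split("."):
        check_label(label, name)

check_name("www.example.com")          # fine
try:
    check_name("www..example.com")     # consecutive dots -> empty label
except UnicodeError as exc:
    assert "empty label" in str(exc)
try:
    check_name("a" * 64 + ".example")  # 64-byte label -> too long
except UnicodeError as exc:
    assert "too long" in str(exc)
```

(A real version would also need to decide how to treat the empty final label produced by a trailing dot in a fully qualified name.)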
[issue17795] backwards-incompatible change in SysLogHandler with unix domain sockets
Mike Lundy added the comment:

It doesn't fix it unless I change the configuration (and in some cases the code) for every SysLogHandler across all of our projects, plus every single library we use. Google around for "SysLogHandler /dev/log socktype" and then compare with "SysLogHandler /dev/log". You won't find many hits where people set socktype, because people knew that SysLogHandler just Did The Right Thing when presented with an AF_UNIX address. That has been the behavior since the logging module was introduced in 2.3.

I'm just asking that you preserve the default behavior that has existed since Python 2.3 - that was the purpose of my patch. I'm not tied to how I implemented it (I mean, it is kind of ugly), but I believe preserving the behavior is important, and I also believe that it will break less code than what is currently there (because, after all, socktype was only introduced in 2.7, the SysLogHandler doesn't care if it's None, and subclasses couldn't have relied on it in the AF_UNIX case because the original fallback didn't update it).

--
[issue4934] tp_del and tp_version_tag undocumented
Alex Leach added the comment:

I've just run into tp_version_tag, when running the Boost.Python testsuite, and wondered what it was... Since upgrading to GCC 4.8, I've started to get a lot more warnings with Python extensions, e.g.:

boost/python/opaque_pointer_converter.hpp:122:14: warning: missing initializer for member '_typeobject::tp_version_tag' [-Wmissing-field-initializers]

In this instance the testsuite was made to compile with the '-Wextra' flag. The fix was pretty simple; add another zero to the opaque_pointer_converter struct. I have used the following preprocessor macro to test whether or not to do this. Would this be a good way to test for the addition?

#if PY_VERSION_HEX >= 0x0206
    0,    /* tp_version_tag */
#endif

Cheers, Alex

--
nosy: +Alex.Leach
[issue14805] Support display of both __cause__ and __context__
Philip Jenvey added the comment: and the code module (after #17442 is resolved) -- nosy: +pjenvey
[issue17656] Python 2.7.4 breaks ZipFile extraction of zip files with unicode member paths
Changes by Serhiy Storchaka storch...@gmail.com: -- nosy: +haypo
[issue17785] Use faster URL shortener for perf.py
Roundup Robot added the comment: New changeset 1488e1f55f61 by Alexandre Vassalotti in branch 'default': Issue #17785: Use a faster URL shortener for perf.py http://hg.python.org/benchmarks/rev/1488e1f55f61 -- nosy: +python-dev
[issue17785] Use faster URL shortener for perf.py
Changes by Alexandre Vassalotti alexan...@peadrop.com:

--
resolution: -> fixed
stage: patch review -> committed/rejected
status: open -> closed
[issue17656] Python 2.7.4 breaks ZipFile extraction of zip files with unicode member paths
Christian Heimes added the comment:

It seems like file() can't handle unicode file names on FreeBSD. The FS encoding is 'US-ASCII' on Snakebite's FreeBSD box.

> /home/cpython/users/christian.heimes/2.7/Lib/zipfile.py(1078)_extract_member()
-> with self.open(member, pwd=pwd) as source, \
(Pdb) self.open(member, pwd=pwd)
<zipfile.ZipExtFile object at 0x801eb5fd0>
(Pdb) n
> /home/cpython/users/christian.heimes/2.7/Lib/zipfile.py(1079)_extract_member()
-> file(targetpath, "wb") as target:
(Pdb) file(targetpath, "wb")
*** UnicodeEncodeError: 'ascii' codec can't encode characters in position 47-48: ordinal not in range(128)
(Pdb) sys.getfilesystemencoding()
'US-ASCII'

--
[issue17720] pickle.py's load_appends should call append() on objects other than lists
Roundup Robot added the comment: New changeset 37139694aed0 by Alexandre Vassalotti in branch '3.3': Isuse #17720: Fix APPENDS handling in the Python implementation of Unpickler http://hg.python.org/cpython/rev/37139694aed0 -- nosy: +python-dev
[issue17720] pickle.py's load_appends should call append() on objects other than lists
Changes by Alexandre Vassalotti alexan...@peadrop.com:

--
resolution: -> fixed
stage: patch review -> committed/rejected
status: open -> closed
[issue17804] streaming struct unpacking
New submission from Antoine Pitrou:

For certain applications, you want to unpack the same pattern repeatedly. This came up in issue17618 (base85 decoding), where you want to unpack a stream of bytes as 32-bit big-endian unsigned ints. The solution adopted in the issue17618 patch (struct.Struct("!{}I")) is clearly suboptimal. I would suggest something like an iter_unpack() function which would repeatedly yield tuples until the bytes object is exhausted.

--
components: Library (Lib)
messages: 187455
nosy: mark.dickinson, meador.inge, pitrou, serhiy.storchaka
priority: normal
severity: normal
status: open
title: streaming struct unpacking
type: enhancement
versions: Python 3.4
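A pure-Python sketch of what Antoine is suggesting: compile the pattern once, then apply it at successive offsets, yielding one tuple per record. (Treat this as an illustration of the proposal, not the final API shape.)

```python
# Sketch of a streaming unpacker: repeatedly apply one Struct pattern
# across a buffer, yielding one tuple per record, and reject buffers
# whose length is not a multiple of the record size.

import struct

def iter_unpack(fmt, buffer):
    s = struct.Struct(fmt)
    if len(buffer) % s.size:
        raise struct.error("buffer length not a multiple of item size")
    for offset in range(0, len(buffer), s.size):
        yield s.unpack_from(buffer, offset)

# The base85 use case: a stream of 32-bit big-endian unsigned ints.
data = struct.pack("!3I", 1, 2, 3)
assert list(iter_unpack("!I", data)) == [(1,), (2,), (3,)]
```

Compiling the Struct once and using unpack_from with an offset avoids both repeated format parsing and repeated slicing of the input, which is the inefficiency in the issue17618 patch.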