ANN: A new version (0.2.7) of the Python module which wraps GnuPG has been released.
A new version of the Python module which wraps GnuPG has been released.

What Changed?
=============
This is a minor enhancement and bug-fix release. See the project website ( http://code.google.com/p/python-gnupg/ ) for more information. Summary:

* Better support for status messages from GnuPG.
* The ability to use symmetric encryption.
* The ability to receive keys from keyservers.
* The ability to use specific keyring files instead of the default keyring files.
* Internally, the code to handle Unicode and bytes has been tidied up.

The current version passes all tests on Windows (CPython 2.4, 2.5, 2.6, 3.1, 2.7 and Jython 2.5.1) and Ubuntu (CPython 2.4, 2.5, 2.6, 2.7, 3.0, 3.1, 3.2). On Windows, GnuPG 1.4.11 has been used for the tests.

What Does It Do?
================
The gnupg module allows Python programs to make use of the functionality provided by the GNU Privacy Guard (abbreviated GPG or GnuPG). Using this module, Python programs can encrypt and decrypt data, digitally sign documents and verify digital signatures, and manage (generate, list and delete) encryption keys, using proven Public Key Infrastructure (PKI) encryption technology based on OpenPGP.

This module is expected to be used with Python versions >= 2.4, as it makes use of the subprocess module which appeared in that version of Python. This module is a newer version derived from earlier work by Andrew Kuchling, Richard Jones and Steve Traugott.

A test suite using unittest is included with the source distribution.

Simple usage:

import gnupg
gpg = gnupg.GPG(gnupghome='/path/to/keyring/directory')
gpg.list_keys()
[{ ... 'fingerprint': 'F819EE7705497D73E3CCEE65197D5DAC68F1AAB2', 'keyid': '197D5DAC68F1AAB2', 'length': '1024', 'type': 'pub', 'uids': ['', 'Gary Gross (A test user) gary.gr...@gamma.com']}, { ... 'fingerprint': '37F24DD4B918CC264D4F31D60C5FEFA7A921FC4A', 'keyid': '0C5FEFA7A921FC4A', 'length': '1024', ... 
'uids': ['', 'Danny Davis (A test user) danny.da...@delta.com']}]
encrypted = gpg.encrypt('Hello, world!', ['0C5FEFA7A921FC4A'])
str(encrypted)
'-----BEGIN PGP MESSAGE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\nhQIOA/6NHMDTXUwcEAf ... -----END PGP MESSAGE-----\n'
decrypted = gpg.decrypt(str(encrypted), passphrase='secret')
str(decrypted)
'Hello, world!'
signed = gpg.sign('Goodbye, world!', passphrase='secret')
verified = gpg.verify(str(signed))
print 'Verified' if verified else 'Not verified'
Verified

For more information, visit http://code.google.com/p/python-gnupg/ - as always, your feedback is most welcome (especially bug reports, patches and suggestions for improvement). Enjoy!

Cheers

Vinay Sajip
Red Dove Consultants Ltd.
-- http://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations/
Re: Argument of the bool function
On 2011-04-09 23:15, rusi wrote: On Apr 10, 8:35 am, Grant Edwards inva...@invalid.invalid wrote: On 2011-04-09, Lie Ryan lie.1...@gmail.com wrote: On 04/09/11 08:59, candide wrote: Le 09/04/2011 00:03, Ethan Furman a écrit : bool([x]) dir([object]) Not very meaningful, is it? The error says it unambiguously: dir() does not take *keyword* arguments; instead dir() takes a *positional* argument: dir("Explicit is better than implicit") I think the point is that both cases are documented exactly the same. In what case(s) would a keyword arg to bool be reasonable? It's just an implementation detail. It's not worth the electrons wasted in this thread already.

-- Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco
-- http://mail.python.org/mailman/listinfo/python-list
Re: Retrieving Python Keywords
Le 10/04/2011 04:09, John Connor a écrit : Actually this is all it takes: import keywords; print keywords.kwlist

import keywords
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named keywords

so I considered at first it was a joke! ;) In fact the import doesn't need the plural, and... Python is very very introspective ;) Thanks, and thanks to Steven too.
-- http://mail.python.org/mailman/listinfo/python-list
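For the record, the singular module name works as described; a minimal check (Python 3 syntax):

```python
# The module name is singular: `keyword`, not `keywords`.
import keyword

print(keyword.kwlist[:3])            # the reserved words, as a list
print(keyword.iskeyword("import"))   # membership test for a single name
```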
Re: Retrieving Python Keywords
Le 10/04/2011 04:01, Terry Reedy a écrit : Yes. (Look in the manuals.) I did: my main reference book is Martelli's /Python in a Nutshell/ and the index doesn't refer to the keyword import, or try the obvious imports ;-) The only obvious one I saw was the sys module.
-- http://mail.python.org/mailman/listinfo/python-list
OOP only in modules
Hi all, from the subject of my post, you can see I do not like very much OOP... and I am not the only one... Knowing that python is intrinsically OO, I propose to move all OOP stuff (classes, instances and so on) to modules. In this way the OOP fan can keep on using it, but in a module recalled by import in the user script. The advantage is that the user can call functions and methods with a well-known syntax. Not to mention the sharp increase in re-usability of code... Bye.
-- http://mail.python.org/mailman/listinfo/python-list
design question
I've been wondering for weeks now how to do this but I still didn't get a satisfying answer, so I hope someone can give a hint... I have some logs which I extract from simulation results. These logs are in the form "timestamp, nodeid, eventname, event_argument" and now I have to analyze the data. I don't have so many metrics to analyze, but for example I might look for symmetric events, in the form

0.0, 0: DATA_SENT(1)
0.1, 1: DATA_RECEIVED(0)

(so in short the nodeid and the argument are swapped). After many changes, the timeline is now something as below, where almost all the functions are static methods and return a new list of events. But now I don't get anything I like for the metrics. Every metric only has to filter and return a result, but the problem is that maybe for some metrics I want the result on all the nodes, sometimes only the average, sometimes I want it correlated with the time (or time slots) and so on. So it must be very flexible, but at the same time not a pain to write... I first wrote a class for every metric subclassing a Metric, but it wasn't nice. Then I tried many other ways but actually I'm a bit stuck... Any suggestions from someone that maybe had a similar problem?

--8<---cut here---start->8---
class Timeline(object):
    """A timeline is generated by the list of events happened during
    the simulation and it's the main point to analyze the data."""

    def __init__(self, events=None):
        if events is None:
            self.events = []
        else:
            self.events = events

    def __str__(self):
        res = []
        for evt_tuple in iter(self):
            res.append(str(evt_tuple))
        return '\n'.join(res)

    def __eq__(self, other):
        return self.events == other.events

    def __iter__(self):
        return iter(sorted(self.events, key=lambda x: x.time))

    def add(self, timestamp, nodeid, evt):
        """Add one line for the given nodeid"""
        tup = EventTuple(timestamp, nodeid, evt)
        self.events.append(tup)

    def add_tuple(self, tup):
        self.events.append(tup)

    def first_event(self, _evt):
        first_evt = Timeline.filt(self.events, evt=_evt)[0]
        return pretty_time(first_evt.time)

    def sort(self):
        Timeline.sort_by(self.events)

    @staticmethod
    def sort_by(events, par='time'):
        # side-effecting method
        events.sort(key=lambda x: getattr(x, par))

    @staticmethod
    def filt(events, node=None, evt=None, arg=None):
        """Takes a node, an event and a function to further filter the time"""
        # copy locally the events and filter them
        res = events[:]
        if node is not None:
            res = filter(lambda x: x.node == node, res)
        if evt is not None:
            res = filter(lambda x: x.evt.name == evt, res)
        if arg is not None:
            res = filter(lambda x: x.evt.arg == arg, res)
        return res
    ...
--8<---cut here---end--->8---
-- http://mail.python.org/mailman/listinfo/python-list
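The symmetric-event matching described above can be sketched directly; the `Event` record and the event names below are hypothetical stand-ins for the real `EventTuple` objects. An index keyed on `(node, arg)` avoids a quadratic scan over the log:

```python
from collections import namedtuple

# Hypothetical minimal event record mirroring the log format
# "timestamp, nodeid, eventname, event_argument".
Event = namedtuple("Event", "time node name arg")

def symmetric_pairs(events, sent="DATA_SENT", received="DATA_RECEIVED"):
    """Yield (sent, received) pairs where nodeid and argument are swapped."""
    recv_index = {}
    for e in events:
        if e.name == received:
            recv_index.setdefault((e.node, e.arg), []).append(e)
    for e in events:
        if e.name == sent:
            # a matching receive has our argument as its node, and vice versa
            for r in recv_index.get((e.arg, e.node), []):
                if r.time >= e.time:
                    yield e, r

events = [Event(0.0, 0, "DATA_SENT", 1), Event(0.1, 1, "DATA_RECEIVED", 0)]
pairs = list(symmetric_pairs(events))
print(pairs)
```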
Re: OOP only in modules
newpyth newp...@gmail.com writes: Hi all, from the subject of my post, you can see I do not like very much OOP... and I am not the only one... Knowing that python is intrinsically OO, I propose to move all OOP stuff (classes, instances and so on) to modules. In this way the OOP fan can keep on using it, but in a module recalled by import in the user script. The advantage is that the user can call functions and methods with a well-known syntax. Not to mention the sharp increase in re-usability of code... Bye.

OOP makes life easier also for the user; if you don't like it, write your own non-OOP wrappers around the OOP functions ;) But I think that you should probably use another language if you don't like OOP so much...
-- http://mail.python.org/mailman/listinfo/python-list
Timebased Function Scheduler with Pause and UnPause functionality
I wrote a somewhat dirty script for scheduling functions and pausing them! Maybe someone can take a look and point out mistakes; I am totally new to Python! https://gist.github.com/911942

-- Narendra Sisodiya | http://narendrasisodiya.com
-- http://mail.python.org/mailman/listinfo/python-list
Re: DOCTYPE + SAX
What you suggested solved my problem, but unfortunately it did reveal that the HTML that I was parsing was not compliant with the DTD that it should have been. There were a lot of missing end tags. In light of this frustrating problem I've gone back to the source docbook code. There are many isolated XML files with content that I want to parse out. One example that I am focussing on starts with this...

<?xml version='1.0' encoding='iso-8859-1'?>
<appendix xmlns="http://docbook.org/ns/docbook" xml:id="indexes">
 <title>&FunctionIndex;</title>

My xml.sax parser fails with...

phpdoc/doc-base/funcindex.xml:3:8: undefined entity

I went looking for FunctionIndex and grep told me...

phpdoc/en/language-defs.ent:<!ENTITY FunctionIndex 'Function Index'>

...and so I looked for pages that explicitly reference language-defs.ent and grep tells me...

phpdoc/doc-base/install-unix.xml:<!ENTITY % language-defs SYSTEM "./en/language-defs.ent">

By this stage, I'm a bit tangled up. Is a SAX parser the right way to parse docbooks when they have locally defined external entities? I am hoping to extract structured information from this documentation to present in another format.
-- http://mail.python.org/mailman/listinfo/python-list
Encoding problem when launching Python27 via DOS
I created a simple program which writes some French text with accents into a Unicode file!

# -*- coding: cp1252 -*-
#!/usr/bin/python
'''
Created on 27 déc. 2010

@author: jpmena
'''
from datetime import datetime
import locale
import codecs
import os, sys

class Log(object):
    log = None

    def __init__(self, log_path, charset_log=None):
        self.log_path = log_path
        if os.path.exists(self.log_path):
            os.remove(self.log_path)
        # self.log = open(self.log_path, 'a')
        if charset_log is None:
            self.charset_log = sys.getdefaultencoding()
        else:
            self.charset_log = charset_log
        self.log = codecs.open(self.log_path, 'a', charset_log)

    def getInstance(log_path=None):
        print "encodage systeme:" + sys.getdefaultencoding()
        if Log.log is None:
            if log_path is None:
                log_path = os.path.join(os.getcwd(), 'logParDefaut.log')
            Log.log = Log(log_path)
        return Log.log

    getInstance = staticmethod(getInstance)

    def p(self, msg):
        aujour_dhui = datetime.now()
        date_stamp = aujour_dhui.strftime("%d/%m/%y-%H:%M:%S")
        print sys.getdefaultencoding()
        unicode_str = u'%s : %s \n' % (date_stamp, msg.encode(self.charset_log, 'replace'))
        self.log.write(unicode_str)
        return unicode_str

    def close(self):
        self.log.flush()
        self.log.close()
        return self.log_path

if __name__ == '__main__':
    l = Log.getInstance()
    l.p("premier message de Log à accents")
    Log.getInstance().p("second message de Log")
    l.close()

I am using PyDev/Aptana for developing. If Aptana launches the program, everything goes well!!! sys.getdefaultencoding() answers 'cp1252'. But if I execute the following batch file in a DOS console on my Windows VISTA:

@echo off
setlocal
chcp 1252
set PYTHON_HOME=C:\Python27
for /F "tokens=1-4 delims=/ " %%i in ('date /t') do (
    if "%%l"=="" (
        :: Windows XP
        set D=%%k%%j%%i
    ) else (
        :: Windows NT/2000
        set D=%%l%%k%%j
    )
)
set PYTHONIOENCODING=cp1252:backslashreplace
%PYTHON_HOME%\python.exe %~dp0\src\utils\Log.py

the answer is:

C:\Users\jpmena\Documents\My Dropbox\RIF\Python\VelocityTransformsgenerationProgrammeSitePublicActuel.cmd
Page de codes active : 1252
encodage systeme:ascii
ascii
Traceback (most recent call last):
  File "C:\Users\jpmena\Documents\My Dropbox\RIF\Python\VelocityTransforms\\src\utils\Log.py", line 51, in <module>
    l.p("premier message de Log à accents")
  File "C:\Users\jpmena\Documents\My Dropbox\RIF\Python\VelocityTransforms\\src\utils\Log.py", line 40, in p
    unicode_str=u'%s : %s \n' % (date_stamp,msg.encode(self.charset_log,'replace'))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe0 in position 23: ordinal not in range(128)

sys.getdefaultencoding() answers 'ascii', so the encode function cannot encode the accent in 'à'. I am using Python27 because it is compatible with the current versions of pyodbc (for accessing an ACCESS database) and airspeed (Velocity Templates in utf-8). The target is to launch airspeed applications via the Windows CRON. Can someone help me? I am really stuck! Thanks...
-- http://mail.python.org/mailman/listinfo/python-list
Re: Argument of the bool function
Le 08/04/2011 18:41, Benjamin Kaplan a écrit : bool(x=5) is just passing the value 5 as the argument x to the function.

Anyway, passing x as a keyword argument to the bool function appears to be very rare: I did a regexp search for about 3 source-code Python files (among them official Python source-code, Django, Sphinx, Eric source-code and many more sources of valuable Python code) and I didn't find even one.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Argument of the bool function
On Sun, Apr 10, 2011 at 10:54 PM, candide candide@free.invalid wrote: Anyway, passing x as a keyword argument to the bool function appears to be very rare : i did a regexp search for about 3 source-code Python files (among them official Python source-code, Django, Sphinx, Eric source-code and many more sources of valuable Python code) and I didn't find even one. Who would use keyword arguments with a function that takes only one arg anyway? ChrisA -- http://mail.python.org/mailman/listinfo/python-list
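For what it's worth, modern CPython 3 settles the question by rejecting the keyword form for bool() just as for dir() (the keyword was still accepted in the 2.x interpreters of this thread). A small check, with a hypothetical helper:

```python
def accepts_keywords(func, **kwargs):
    """Return True if func accepts the given keyword arguments."""
    try:
        func(**kwargs)
        return True
    except TypeError:
        return False

# bool(x=5) worked on the Python 2.x of this thread; current CPython 3
# raises TypeError for it, just as dir() always has for keyword args.
print(accepts_keywords(bool, x=5))
print(bool(5))  # the positional form is unaffected
```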
Re: design question
Andrea Crotti andrea.crott...@gmail.com writes: [...]

I left the Timeline as before, but tried to rewrite some more classes. This is the abstract class for a metric, and below another class for the metric which involves only counting things. In the end, an example of how to use this. I need to see synthetic values during my quick testing but also to reuse a lot of data to actually create graphs later... Any idea/suggestion is welcome, I would like finally to get it right...

--8<---cut here---start->8---
class AbsMetric(object):
    def __init__(self, mode='all'):
        # mode can be 'all, avg, senders, landmarks, receivers, mobiles'
        self.mode = mode
        self.result = {}

    def __str__(self):
        res = []
        for k, val in self.result.items():
            res.append("%d: %f" % (k, val))
        return '\n'.join(res)

    def __len__(self):
        return len(self.events)

    def filt_node(self, network):
        modes = {
            'all': network.nodes,
            'senders': network.senders,
            'landmarks': network.lands,
            'receivers': network.receivers,
            'mobiles': network.mobiles,
        }
        return modes[self.mode]
--8<---cut here---end--->8---

--8<---cut here---start->8---
class CountingMetric(AbsMetric):
    def __init__(self, mode, to_count):
        super(CountingMetric, self).__init__(mode)
        self.to_count = to_count

    def compute(self, timeline, network):
        for n in self.filt_node(network):
            # there is a lot of computation going on every time
            evts = Timeline.filt(timeline.events, evt=self.to_count, node=n)
            if len(evts) > 0:
                self.result[n] = len(evts)

addr_update = CountingMetric('all', 'ADDRESS_UPDATE_SENT')
--8<---cut here---end--->8---
-- http://mail.python.org/mailman/listinfo/python-list
Re: OOP only in modules
On Sun, 10 Apr 2011 03:35:48 -0700, newpyth wrote: Hi all, from the subject of my post, you can see I do not like very much OOP... and I am not the only one... Knowing that python is intrinsically OO, I propose to move all OOP stuff (classes, instances and so on) to modules.

Python is based on objects, but it is not an object-oriented language. It is a multi-paradigm language: it includes elements of OOP, functional, procedural and imperative programming. Some of these are fundamental to Python: the import statement is pure imperative style. For example, Python has:

import module        # imperative style
len(mylist)          # procedural
map(func, sequence)  # functional
mylist.sort()        # object-oriented

With a third-party package, Pyke, you can use Prolog-style logic programming: http://pyke.sourceforge.net/ (albeit with a procedural syntax). There are probably third-party packages for agent-based programming as well. If you don't like OOP, you can write your code using a functional style, or a procedural style. List comprehensions and generator expressions are *very* common in Python, which come from functional and pipeline styles of programming. If you really, really hate OOP, you can even write your own wrappers for Python objects. Unlike Java, Python encourages by example the use of shallow class hierarchies. Most Python classes are only two levels deep:

object
+-- list
+-- tuple
+-- dict
+-- set
etc.

instead of the deep, complex hierarchies beloved by some OOP languages:

Object
+-- Collection
    +-- Sequence
    |   +-- MutableSequence
    |   |   +-- IndexableMutableSequence
    |   |       +-- SortableIndexableMutableSequence
    |   |           +-- SortableIndexableMutableSequenceArray
    |   |               +-- List
    |   +-- ImmutableSequence
    |       +-- IndexableImmutableSequence
    |           +-- SortableIndexableImmutableSequence
    |               +-- SortableIndexableImmutableSequenceArray
    |                   +-- Tuple
    +-- Mapping
etc.

So while everything in Python is an object, the language itself is only partly object oriented, and it rarely gets in the way. 
The OO aspect of Python is mostly syntax and namespaces. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
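The styles listed above can sit side by side in a few lines; a minimal sketch touching the same data in each of them:

```python
# The same task in each style Steven names.
import math                                   # imperative: a statement with effects
data = [3, 1, 2]
n = len(data)                                 # procedural: a free function
doubled = list(map(lambda x: 2 * x, data))    # functional: map over a sequence
data.sort()                                   # object-oriented: a method, in place
print(n, doubled, data)
```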
Re: How to program in Python to run system commands in 1000s of servers
On Apr 8, 5:40 am, Thomas Rachel nutznetz-0c1b6768-bfa9-48d5-a470-7603bd3aa...@spamschutz.glglgl.de wrote: On 07.04.2011 21:14, Anssi Saari wrote: Chris Angelico ros...@gmail.com writes: Depending on what exactly is needed, it might be easier to run a separate daemon on the computers, one whose sole purpose is to do the task / get the statistics needed and return them. Then the Python script need only collect each program's returned response. Those would still need to be deployed somehow to the thousands of machines though. But only once... I realized after posting that something like pexpect might work for stuffing the keystrokes needed to root login via ssh to all machines and such... If that's what he needs to do, since it wasn't very clear. Maybe that works. But it is much, much worse than using keys... Thomas

Thank you all for the various ideas. Let me give some background and more information here. The reason we cannot use root-trusted ssh is an Internal Information Security decision. Given this restriction, I wanted to explore what other creative options we have so that we can still accomplish this. In our enterprise environment, quick production support is very important. Troubleshooting an application problem might require us to check various statuses on multiple servers quickly, so we need to execute commands depending on the situation. Let me summarize some of the ideas presented in this thread:

1. Use pexpect to login and become root (or sudo - yes, sudo is allowed) on the remote machines
2. Run a daemon on each server, which will respond to client requests
3. Run your program through cron, collect data and dump it into a database which can be queried later [yes - this is on the plate]
4. Use Fabric (fabfile.org) for developing the program. Does this assume that ssh root trust is already in place?

Are there any more different approaches? I suppose if we take the daemon approach then we can make it a webservice as well? 
-- http://mail.python.org/mailman/listinfo/python-list
Re: How to program in Python to run system commands in 1000s of servers
On Mon, Apr 11, 2011 at 12:22 AM, Babu bab...@gmail.com wrote: Are there any more different approaches? I suppose if we take the daemon approach then we can make it a webservice as well?

Yes, your daemon could function via HTTP. But if you go that route, you would need some way to collect all the different computers' results. For example, suppose you build your daemon to respond to HTTP requests on port 8000, with a document name like /status. You could then retrieve _one_ computer's status by pointing your browser to http://computername/status - but that's only one. You would then need a wrapper somewhere to collect them, for instance:

<iframe src="http://computer1/status"></iframe>
<iframe src="http://computer2/status"></iframe>
<iframe src="http://computer3/status"></iframe>

etc. If you're always getting status on the same set of computers (or a few standard sets of computers), this could be a simple .html file that you have on your hard disk; otherwise, you may want to consider another web server that lets you tick which ones to query, and builds an iframe list from your selections.

Chris Angelico
-- http://mail.python.org/mailman/listinfo/python-list
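The collection could also be done from Python itself rather than a browser; a hedged sketch where the host names and the injectable `fetch` callable are stand-ins (a real one might wrap `urllib.request.urlopen`):

```python
# Fan out one /status request per host and gather the results.
from concurrent.futures import ThreadPoolExecutor

def collect_status(hosts, fetch, max_workers=32):
    """Return {host: status_or_error} by querying all hosts concurrently."""
    def one(host):
        try:
            # port 8000 and the /status path follow the daemon sketch above
            return host, fetch("http://%s:8000/status" % host)
        except Exception as exc:
            return host, "error: %s" % exc
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(one, hosts))

# Stubbed demonstration with a fake fetch; no network is touched.
fake = lambda url: "ok (%s)" % url
print(collect_status(["web1", "web2"], fake))
```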
Re: Multiprocessing, shared memory vs. pickled copies
On 9 apr, 22:18, John Ladasky lada...@my-deja.com wrote: So, there are limited advantages to trying to parallelize the evaluation of ONE cascade network's weights against ONE input vector. However, evaluating several copies of one cascade network's output, against several different test inputs simultaneously, should scale up nicely.

You get a 1-by-n Jacobian for the errors, for which a parallel vector math library could help. You will also need to train the input layer (errors with an m-by-n Jacobian), with QuickProp or LM, for which an optimized LAPACK can give you a parallel QR or SVD. So all the inherent parallelism in this case can be handled by libraries.

Well, I thought that NumPy was that fast library...

NumPy is a convenient container (data buffer object), and often an acceptably fast library (comparable to Matlab). But compared to raw C or Fortran it can be slow (NumPy is memory bound, not compute bound), and compared to many performance libraries NumPy code will run like a turtle.

My single-CPU neural net training program had two threads, one for the GUI and one for the neural network computations. Correct me if I'm wrong here, but -- since the two threads share a single Python interpreter, this means that only a single CPU is used, right? I'm looking at multiprocessing for this reason.

Only access to the Python interpreter is serialised. It is a common misunderstanding that everything is serialized. Two important points:

1. The functions you call can spawn threads in C or Fortran. These threads can run freely even though one of them is holding the GIL. This is e.g. what happens when you use OpenMP with Fortran or call a performance library. This is also how Twisted can implement asynchronous i/o on Windows using i/o completion ports (Windows will set up a pool of background threads that run freely.)

2. The GIL can be released while calling into C or Fortran, and reacquired afterwards, so multiple Python threads can execute in parallel. If you call a function you know to be re-entrant, you can release the GIL when calling it. This is also how Python threads can be used to implement non-blocking i/o with functions like file.read (they release the GIL before blocking on i/o).

The extent to which these two situations can happen depends on the libraries you use, and how you decide to call them. Also note that NumPy/SciPy can be compiled against different BLAS/LAPACK libraries, some of which will use multithreading internally. Note that NumPy and SciPy do not release the GIL as often as they could, which is why I often prefer to use libraries like LAPACK directly. In your setup you should release the GIL around computations to prevent the GUI from freezing.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
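Point 2 can be seen from pure Python alone: `time.sleep` releases the GIL while blocking, so "blocked" threads genuinely overlap. A minimal sketch:

```python
# Blocking C-level calls (here time.sleep) release the GIL, so four
# 0.2 s sleeps on four threads overlap instead of serialising to 0.8 s.
import threading
import time

def blocking_task():
    time.sleep(0.2)   # the GIL is released for the duration of the sleep

start = time.time()
threads = [threading.Thread(target=blocking_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print("elapsed: %.2f s" % elapsed)
```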
Re: Multiprocessing, shared memory vs. pickled copies
Now, I don't know that I actually HAVE to pass my neural network and input data as copies -- they're both READ-ONLY objects for the duration of an evaluate function (which can go on for quite a while). One option in that case is to use fork (if you're on a *nix machine). See http://pythonwise.blogspot.com/2009/04/pmap.html for example ;) HTH -- Miki Tebeka miki.teb...@gmail.com http://pythonwise.blogspot.com -- http://mail.python.org/mailman/listinfo/python-list
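A minimal sketch of that fork-based trick with the stdlib's `multiprocessing`; the `NETWORK` dict is a hypothetical stand-in for the read-only network object. With the fork start method (the Unix default), data built before the pool is inherited by the workers, so only the small inputs and results cross process boundaries:

```python
import multiprocessing as mp

# Stand-in for the big read-only structure; under fork it is inherited
# copy-on-write by the workers instead of being pickled per call.
NETWORK = {"weights": list(range(5))}

def evaluate(x):
    # Each worker reads the inherited NETWORK; sum(range(5)) == 10.
    return x * sum(NETWORK["weights"])

def parallel_evaluate(xs):
    with mp.Pool(2) as pool:
        return pool.map(evaluate, xs)

if __name__ == "__main__":
    print(parallel_evaluate([1, 2, 3]))
```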
Re: Argument of the bool function
Chris Angelico wrote: Who would use keyword arguments with a function that takes only one arg anyway?

It's hard to imagine. Maybe somebody trying to generalize function calls (trying to interpret some other language using a python program?)

# e.g. input winds up having the effect of ...
function = bool
name = 'x'
value = 'the well at the end of the world'
## ...
actions.append((function, {name: value}))
## ...
for function, args in actions:
    results.append(function(**args))

Not something I, for one, do every day. But regularity in a language is good when you can get it, especially for abstract things like that. I can sort of guess that `dir` was perhaps coded in C for speed and doesn't spend time looking for complicated argument lists. Python is a pragmatic language, so all the rules come pre-broken.

Mel.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing, shared memory vs. pickled copies
On 4/10/2011 9:11 AM, Miki Tebeka wrote: Now, I don't know that I actually HAVE to pass my neural network and input data as copies -- they're both READ-ONLY objects for the duration of an evaluate function (which can go on for quite a while). One option in that case is to use fork (if you're on a *nix machine). See http://pythonwise.blogspot.com/2009/04/pmap.html for example ;) Unless you have a performance problem, don't bother with shared memory. If you have a performance problem, Python is probably the wrong tool for the job anyway. John Nagle -- http://mail.python.org/mailman/listinfo/python-list
Re: OOP only in modules
Hi all, I must first thank Andrea Crotti and Steven D'Aprano, who kindly replied to my post... they deserve an answer.

To Andrea Crotti's "OOP makes life easier also to the user"... that is NOT my experience... I'm not pretending that everyone else thinks like me (though many people do... load any search engine with "against OOP" or "criticisms against OO" to verify...). I was trying to get a caller-callee tree from the module trace (like cflow does with C sources, together with xref)... I think that UML is a waste of time... trace has three classes whose methods I can't easily arrange in the caller-callee tree, mainly because IMHO they are similar to functions declared inside another function. So I thought to move the classes to a module (trace_classes.py) saved in the same folder as the python source to be traced. Only by using "from trace_classes import *" in trace_noclasses.py, which contained the residual part of trace after removing the classes, did you get the same result as the original trace... In fact

python trace.py -t mysource.py

worked the same as:

python trace_noclasses.py -t mysource.py

if of course you load the classes by "from trace_classes import *" (as mentioned before). To trace the module trace.py itself you can use:

python trace.py -t trace.py -t mysource.py

(it seems that you must include at least one source to be traced.) The problems arise if you want to use the standard import... with the two components of trace (with or without classes)... because of the instances or the objects defined with a class template... For me it's enough to study a module with classes and instances defined outside them, and to be able to use it without referring to internal instances... Do you know a module of this kind (trace itself could be the answer, but the source is too complicated for me...)? I would not like to be compelled to revert to another language, as Andrea Crotti suggests ("I think that you should probably use another language if you don't like OOP so much...").

As far as Steven D'Aprano and his "Python is based on objects, but it is not an object-oriented language" is concerned, I could agree with him (... with some difficulty, however...). For him (and I agree) python is a multi-paradigm language: it includes "elements of OOP, functional, procedural and imperative programming", but the OO example is merely "mylist.sort() # object-oriented", without citing the classes and the multiple inheritance or other obscure properties. My main goal is to arrange OO in a paradigmatic manner in order to apply the procedural scheme to it, especially to the caller or called modules. Bye.
-- http://mail.python.org/mailman/listinfo/python-list
Re: [OT] Free software versus software idea patents
On Sat, 2011-04-09 at 23:55 +, Steven D'Aprano wrote: On Fri, 08 Apr 2011 01:37:45 -0500, harrismh777 wrote: Steven D'Aprano wrote: The reason Mono gets hit (from others besides me) is that they are in partnership and collaboration with Microsoft, consciously and unconsciously. This must be punished. Just like Python, Apache, and the Linux kernel. What are you going to do to punish them? What do you mean 'just like?They are nothing alike. All three of Python, Apache and Linux have accepted donations from Microsoft. Microsoft is a corporate sponsor of the PSF. Microsoft is not in the business of donating money and time to competitors out of the goodness of their heart. If Microsoft is giving them money or code, they must be getting something out of it. All three projects actively collaborate with Microsoft from time to time, some more than others. .NET's IronPython is one of the big four Python implementations (the others being CPython, PyPy, and Jython), and actively supported by Microsoft. What's good for Python is good for IronPython and Microsoft. Perhaps I should also have included Firefox and Thunderbird, which actively court Windows users and developers, sometimes at the expense of Linux users (e.g. the use of SQLite), thus legitimizing Windows as an OS for the FOSS community as well as improving the user-experience for Windows users. Or Samba, which doesn't merely compete with Microsoft's SMB, but spreads it into the Unix world and legitimizes Microsoft's protocol among FOSS users. The Samba project has even worked side-by-side with Microsoft to solve technical issues with Vista connectivity (or at least the Samba-TNG project has). You paint a very attractive picture of Good versus Evil, but real life is not as black and white as you make out. Mono fails to live up to your extremely high standards of FOSS purity, and so you dump on it. But so do some of the most important and widespread FOSS projects, yet you give them a free pass. Curious. 
-- Steven It's not Good vs. Evil, it's business vs. consumers. Microsoft, as a business, can choose what it wants to charge, how it wants to license its product, and which intellectual property it wants to secure. Consumers, on the other hand, can choose to buy a product or not. If this ends up hurting Microsoft, they'll have to change something. I don't use Linux because it's free, I use it because it works well for what I need, costs $0, and is easy to maintain. -- http://mail.python.org/mailman/listinfo/python-list
Re: Encoding problem when launching Python27 via DOS
On 10/04/2011 13:22, Jean-Pierre M wrote: I created a simple program which writes some French text with accents into a Unicode file! [snip] This line: l.p('premier message de Log à accents') passes a bytestring to the method, and inside the method, this line: unicode_str = u'%s : %s \n' % (date_stamp, msg.encode(self.charset_log, 'replace')) tries to encode the bytestring to Unicode. It's not possible to encode a bytestring, only a Unicode string, so Python tries to decode the bytestring using the fallback encoding (ASCII) and then encode the result. Unfortunately, the bytestring isn't ASCII (it contains accented characters), so it can't be decoded as ASCII, hence the exception. BTW, it's probably better to forget about cp1252, etc., and use UTF-8 instead, and also to use Unicode wherever possible. -- http://mail.python.org/mailman/listinfo/python-list
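[Editor's sketch, not part of the original post.] The boundary rule suggested at the end — keep text as Unicode inside the program, and encode only once, at the output boundary — can be sketched as follows; the variable name is illustrative:

```python
# -*- coding: utf-8 -*-
# Keep text as Unicode inside the program; encode once, at the output
# boundary, preferably to UTF-8 as suggested above.
msg = u'premier message de Log \xe0 accents'  # u'... à accents'

encoded = msg.encode('utf-8')     # bytes, safe to write to a binary file
restored = encoded.decode('utf-8')
assert restored == msg
```

Calling .encode() on an already-encoded bytestring is what triggered the implicit ASCII decode in the original traceback.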
Re: OOP only in modules
newpyth newp...@gmail.com writes: [...] My main goal is to arrange OO in a paradigmatic manner in order to apply to it the procedural scheme, especially to the caller or called modules. Bye. I have some trouble understanding what you mean. Can you write an example of code that you find annoying, and show how you would like to write it instead? As for the criticisms of OOP, where they make sense they apply less to Python than to languages like Java. Python gives you enough freedom to be procedural or functional. -- http://mail.python.org/mailman/listinfo/python-list
Re: Argument of the bool function
On 10-Apr-11 12:21 PM, Mel wrote: Chris Angelico wrote: Who would use keyword arguments with a function that takes only one arg anyway? It's hard to imagine. Maybe somebody trying to generalize function calls (trying to interpret some other language using a Python program?)

# e.g. input winds up having the effect of ...
function = bool
name = 'x'
value = 'the well at the end of the world'
## ...
actions.append((function, {name: value}))
## ...
for function, args in actions:
    results.append(function(**args))

Not something I, for one, do every day. But regularity in a language is good when you can get it, especially for abstract things like that. I can sort of guess that `dir` was perhaps coded in C for speed and doesn't spend time looking for complicated argument lists. Python is a pragmatic language, so all the rules come pre-broken. Mel. This thread has lasted 3 days so far. I presume that it is agreed that the following is a satisfactory outcome:

Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit (Intel)] on win32
>>> bool(x=0)
False
>>> bool(x=1)
True

Colin W. -- http://mail.python.org/mailman/listinfo/python-list
Re: Retrieving Python Keywords
On 4/10/2011 5:12 AM, candide wrote: Le 10/04/2011 04:01, Terry Reedy a écrit : Yes. (Look in the manuals, I did : my main reference book is the Martelli's /Python in a Nutshell/ You should only use that as a supplement. and the index doesn't refer to the keyword import and now you know why ;-). I meant the fine, heavily edited and constantly improved by 20+ people manuals that come with Python. The Global Module Index has one entry under K -- the keyword module. The General Index has multiple entries for 'keyword', including 'keyword(module)'. or try the obvious imports ;-) I meant, 'import keyword' or 'import keywords'. Sorry, I guess perhaps not so obvious if one is not used to Python's extreme introspection features. -- Terry Jan Reedy -- http://mail.python.org/mailman/listinfo/python-list
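[Editor's note, not part of the original post.] For anyone following along, the introspection being discussed is available directly from the stdlib:

```python
# The stdlib keyword module lists Python's reserved words.
import keyword

print(keyword.kwlist)               # e.g. ['False', 'None', ..., 'yield']
print(keyword.iskeyword('import'))  # True: 'import' is a keyword
```

The exact contents of keyword.kwlist vary with the interpreter version, which is precisely why asking the running interpreter beats a printed index.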
Python program termination and exception catching
Hello everyone, This may sound like a bit of a strange desire, but I want to change the way in which a python program quits if an exception is not caught. The program has many different classes of exceptions (for clarity purposes), and they're raised whenever something goes wrong. Most I want to be fatal, but others I'd want to catch and deal with. Is there any way to control Python's default exit strategy when it hits an uncaught exception (for instance, call another function that exits differently)? An obvious way is to just catch every exception and manually call that function, but then I fill up my script with trys and excepts which hurts readability (and makes the code uglier) and quashes tracebacks; neither of which I want to do. Any thoughts? Thanks! Jason -- http://mail.python.org/mailman/listinfo/python-list
Re: Python program termination and exception catching
On 2011.04.10. 21:25, Jason Swails wrote: Hello everyone, This may sound like a bit of a strange desire, but I want to change the way in which a python program quits if an exception is not caught. The program has many different classes of exceptions (for clarity purposes), and they're raised whenever something goes wrong. Most I want to be fatal, but others I'd want to catch and deal with. Well, the application quits when all of its threads have ended. Do you want to catch those exceptions only in the last thread? Or do you want to do it in all threads? Or just the main thread? Is there any way to control Python's default exit strategy when it hits an uncaught exception (for instance, call another function that exits differently)? You can try the atexit module. But I think this is not what you want. I don't think that exceptions can be intercepted in unusual ways. There should be only one obvious way to do it. Can you please write more about why it isn't good to use try/except? An obvious way is to just catch every exception and manually call that function, but then I fill up my script with trys and excepts which hurts readability (and makes the code uglier) and quashes tracebacks; neither of which I want to do. Okay, so maybe I misunderstood. You were talking about changing the way in which a python program quits. Can you please come up with an example and explain your problem in more detail? -- http://mail.python.org/mailman/listinfo/python-list
Re: Python program termination and exception catching
On Sun, Apr 10, 2011 at 12:34 PM, Laszlo Nagy gand...@shopzeus.com wrote: 2011.04.10. 21:25 keltezéssel, Jason Swails írta: Hello everyone, This may sound like a bit of a strange desire, but I want to change the way in which a python program quits if an exception is not caught. The program has many different classes of exceptions (for clarity purposes), and they're raised whenever something goes wrong. Most I want to be fatal, but others I'd want to catch and deal with. Well, the application quits when all of it threads are ended. Do you want to catch those exception only in the last threads? Or do you want to do it in all threads? Or just the main thread? The problem here is that the threads in this case are MPI threads, not threads spawned during execution. (mpi4py). I want exceptions to be fatal as they normally are, but it *must* call MPI's Abort function instead of just dying, because that will strand the rest of the processes and cause an infinite hang if there are any subsequent communication attempts. Hopefully this explains it more clearly? Thanks! Jason -- Jason M. Swails Quantum Theory Project, University of Florida Ph.D. Candidate 352-392-4032 -- http://mail.python.org/mailman/listinfo/python-list
Re: Python program termination and exception catching
On Sun, Apr 10, 2011 at 4:05 PM, Jason Swails jason.swa...@gmail.com wrote: On Sun, Apr 10, 2011 at 12:34 PM, Laszlo Nagy gand...@shopzeus.com wrote: 2011.04.10. 21:25 keltezéssel, Jason Swails írta: Hello everyone, This may sound like a bit of a strange desire, but I want to change the way in which a python program quits if an exception is not caught. The program has many different classes of exceptions (for clarity purposes), and they're raised whenever something goes wrong. Most I want to be fatal, but others I'd want to catch and deal with. Well, the application quits when all of it threads are ended. Do you want to catch those exception only in the last threads? Or do you want to do it in all threads? Or just the main thread? The problem here is that the threads in this case are MPI threads, not threads spawned during execution. (mpi4py). I want exceptions to be fatal as they normally are, but it *must* call MPI's Abort function instead of just dying, because that will strand the rest of the processes and cause an infinite hang if there are any subsequent communication attempts. Hopefully this explains it more clearly? Thanks! Jason Is there any reason you can't just surround the top level stuff in a try block? Assuming you have any structure at all to your program, there should be a single function call or two that triggers all the MPI stuff- just catch the exceptions up there. -- Jason M. Swails Quantum Theory Project, University of Florida Ph.D. Candidate 352-392-4032 -- http://mail.python.org/mailman/listinfo/python-list -- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing, shared memory vs. pickled copies
On 10 apr, 18:27, John Nagle na...@animats.com wrote: Unless you have a performance problem, don't bother with shared memory. If you have a performance problem, Python is probably the wrong tool for the job anyway. Then why does Python have a multiprocessing module? In my opinion, if Python has a multiprocessing module in the standard library, it should also be possible to use it with NumPy. Sturla -- http://mail.python.org/mailman/listinfo/python-list
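[Editor's sketch, not part of the original post.] For context on the shared-memory side of this thread: the stdlib multiprocessing module already exposes shared ctypes buffers, and NumPy can wrap the same buffer with np.frombuffer. A minimal, stdlib-only sketch (the array size and values are illustrative):

```python
import ctypes
import multiprocessing

# A shared, lock-protected array of 5 doubles.  Child processes created
# by multiprocessing see the same underlying memory, so the data itself
# never needs to be pickled.
shared = multiprocessing.Array(ctypes.c_double, 5)

for i in range(len(shared)):
    shared[i] = i * 2.0

print(shared[:])  # [0.0, 2.0, 4.0, 6.0, 8.0]
```

NumPy integration (e.g. np.frombuffer(shared.get_obj())) builds on exactly this mechanism, which is what Sturla's module automates.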
Re: Multiprocessing, shared memory vs. pickled copies
On 8 apr, 03:10, sturlamolden sturlamol...@yahoo.no wrote: That was easy, 64-bit support for Windows is done :-) Now I'll just have to fix the Linux code, and figure out what to do with os._exit preventing clean-up on exit... :-( Now it feel dumb, it's not worse than monkey patching os._exit, which I should have realised since the machinery depends on monkey patching NumPy. I must have forgotten we're working with Python here, or I have been thinking too complex. Ok, 64-bit support for Linux is done too, and the memory leak is gone :-) Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Python program termination and exception catching
On Sun, Apr 10, 2011 at 3:25 PM, Jason Swails jason.swa...@gmail.com wrote: Hello everyone, This may sound like a bit of a strange desire, but I want to change the way in which a python program quits if an exception is not caught. The program has many different classes of exceptions (for clarity purposes), and they're raised whenever something goes wrong. Most I want to be fatal, but others I'd want to catch and deal with. Is there any way to control Python's default exit strategy when it hits an uncaught exception (for instance, call another function that exits differently)? When an exception is raised and uncaught, the interpreter calls sys.excepthook. You can replace sys.excepthook with your own function. See http://docs.python.org/library/sys.html#sys.excepthook If your program is threaded, you may need to look at this bug: http://bugs.python.org/issue1230540. It describes a problem with replacing sys.excepthook when using the threading module, along with some workarounds. There's a simple example of replacing excepthook here: http://code.activestate.com/recipes/65287/ -- Jerry -- http://mail.python.org/mailman/listinfo/python-list
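[Editor's sketch, not part of the original post.] A minimal illustration of replacing the hook (the handler name and exit status are illustrative; the original poster would exit however they need to):

```python
import sys
import traceback

def my_excepthook(exc_type, exc_value, exc_tb):
    # Report the traceback as usual, then exit with a custom status.
    traceback.print_exception(exc_type, exc_value, exc_tb)
    sys.exit(1)

sys.excepthook = my_excepthook
```

Any exception that propagates out of the main program now goes through my_excepthook instead of the default printer.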
Re: Argument of the bool function
Mel mwil...@the-wire.com writes: Python is a pragmatic language, so all the rules come pre-broken. +1 QOTW -- \ “Science shows that belief in God is not only obsolete. It is | `\also incoherent.” —Victor J. Stenger, 2001 | _o__) | Ben Finney -- http://mail.python.org/mailman/listinfo/python-list
Re: [OT] Free software versus software idea patents
Steven D'Aprano wrote: What do you mean 'just like?They are nothing alike. All three of Python, Apache and Linux have accepted donations from Microsoft. Microsoft is a corporate sponsor of the PSF. Microsoft is not in the business of donating money and time to competitors out of the goodness of their heart. If Microsoft is giving them money or code, they must be getting something out of it. Not so fast there, Steve. If they [Microsoft] are paying anything (unsubstantiated, unknowable) to Python, Apache, or (Linux, whatever you mean by that term...) there are only two motives: 1) Embrace, Extend, Kill... 2) They are paying for interoperability (revealing their underlying proprietary frameworks) in exchange for not being left out in the cold when the rest of the world dumps their system and moves to a free society. Microsoft is between the proverbial rock and that hard place you always hear about. The problem with motive number (1) is that nobody cares for (nor needs) their extensions, and its very hard to kill *anything* that is GPLd or licensed under a GPL compatible license. There really isn't a lot they can do... except threaten litigation or to use a front to threaten litigation. The deal with motive number (2) is that there are fewer and fewer teams who are concerned with interoperability. For instance (my team), we moved our stuff to gnulinux based systems and dumped Microsoft completely... we have no need for them at all (they're dead). The Linux Foundation president made a splash the other day by saying that bashing Microsoft was like kicking a puppy (the server cloud war is over, and Microsoft lost... big). The desktop is all that is left... and that is dying... rapidly. Their lockin is well entrenched (like Borg implants ) but the number of mom pops ( like my entire extended family, for instance) who are moving to Ubuntu (themselves) is astounding! It will not be long and Microsoft will die... and none too soon. 
All three projects actively collaborate with Microsoft from time to time, Again, this is not true. There is a huge difference between active collaboration, and accepting donations from Greeks bearing gifts... :) some more than others. .NET's IronPython is one of the big four Python implementations (the others being CPython, PyPy, and Jython), and actively supported by Microsoft. What's good for Python is good for IronPython and Microsoft. I'm not interested in any of those mentioned above, except Python (PSF). I don't support the others in any way. Period. Perhaps I should also have included Firefox and Thunderbird, which actively court Windows users and developers, sometimes at the expense of Linux users (e.g. the use of SQLite), Again, not true. What is happening with Firefox and Thunderbird is to provide the most common interfaces mom pops use (freely, actively, functionally, easily) so that they are freed from the Borg implant of IE. Then, when mom pop try the live cd from Ubuntu, or Gentoo, or BLAG, or gNewSense, whatever... they have interfaces that they are comfortable using. Clever,huh? None of this is at the expense of the gnulinux community... on the contrary, these tactics are for our benefit and must be supported; with time, attention, education, and yes, even our money. IE is dead. It is flat dead... almost nobody is using it... not even die-hard Windows gaming fanboys... we're on our way to freedom. Or Samba, which doesn't merely compete with Microsoft's SMB, but spreads it into the Unix world and legitimizes Microsoft's protocol among FOSS users. Samba is almost (almost) not necessary any longer either. Most shops today are supporting CUPS and the shops who provide Windows print servers are supporting the SMB interface so that all the Linux and Macs can still print... I'm thinking here mostly of the university setting (where I'm @ these days) where 9 out of 10 computers is a Mac Book and half of all remaining notebooks are running Ubuntu... 
again, this is not to legitimize Windows as a platform, but merely *in place* to expedite the rapid migration of users to *nix platforms. When finished, we turn off Samba and everyone is running CUPS... end of a sad sad story for Microsoft, and the dawning of a bright new day for freedom. You paint a very attractive picture of Good versus Evil, but real life is not as black and white as you make out. Real life is all I'm interested in... that is why I'm an activist, as well as a computer scientist; I care. Free software - Free society. The time is now, and the timing is critical. Microsoft and the representative oligarchy must die. We don't need them any longer and they have crippled the entire world for far too long Mono fails to live up to your extremely high standards of FOSS purity This is one of the few things you have said that I can agree with, sort-of; kind-of. Mono is deliberately acting
Re: [OT] Free software versus software idea patents
On Mon, Apr 11, 2011 at 10:04 AM, harrismh777 harrismh...@charter.net wrote: Not so fast there, Steve. If they [Microsoft] are paying anything (unsubstantiated, unknowable) to Python, Apache, or (Linux, whatever you mean by that term...) there are only two motives: http://www.python.org/psf/ - Microsoft is listed. I would assume one does not become a sponsor member without paying money (or a whole lot of something that translates to money, like staff hours). http://www.apache.org/foundation/thanks.html - again, Microsoft is listed (in the top category, Platinum Sponsors). Feel free to continue discussing the merits of these donations, but it is definitely substantiable and knowable. Microsoft has money, and they're prepared to spend it on what they believe in. (My view is that they believe in positive PR more than they believe in Python or Apache or whatever.) Chris Angelico -- http://mail.python.org/mailman/listinfo/python-list
Re: [OT] Free software versus software idea patents
Chris Angelico wrote: Not so fast there, Steve. If they [Microsoft] are paying anything (unsubstantiated, unknowable) to Python, Apache, or (Linux, whatever you mean by that term...) there are only two motives: http://www.python.org/psf/ - Microsoft is listed. http://www.apache.org/foundation/thanks.html - again, Microsoft is listed (My view is that they believe in positive PR more than they believe in Python or Apache or whatever.) Yes, my view as well. I should clarify... what I mean by unsubstantiated and unknowable is that from a sponsorship list alone it is not possible to determine (as an outsider) what real level the sponsorship has attained, nor what political force for influence or reciprocity has occurred. Microsoft has much to gain from playing well in what remains of their corner of the sandbox. But, talk about a day late and a dollar short... kind regards, m harris -- http://mail.python.org/mailman/listinfo/python-list
Re: Creating unit tests on the fly
In article e2a1efe6-17bb-4d94-8fcc-b812b41f6...@d28g2000yqf.googlegroups.com, Raymond Hettinger pyt...@rcn.com wrote: I think you're going to need a queue of tests, with your own test runner consuming the queue, and your on-the-fly test creator running as a producer thread. Writing your own test runner isn't difficult. 1) wait on the queue for a new test case. 2) invoke test_case.run() with a TestResult object to hold the result 3) accumulate or report the results 4) repeat forever. OK, this is working out pretty nicely. The main loop is shaping up to be something like:

def go(self):
    result = unittest.TestResult()
    while not self.queue.empty():
        route, depth = self.queue.get()
        test_case = self.make_test_case(route)
        suite = unittest.defaultTestLoader. \
                loadTestsFromTestCase(test_case)
        suite.run(result)
    if result.wasSuccessful():
        print 'passed'
    else:
        for case, trace in result.failures:
            print case.id()
            d = case.shortDescription()
            if d:
                print d
            print trace
            print ''

It turns out there's really no reason to put the test runner in its own thread. Doing it all in one thread works fine; make_test_case() passes self.queue to the newly created TestCase as part of the class dict, and one of the test methods in my BaseSmokeTest pushes each newly discovered route onto the queue. Perhaps not the most efficient way to do things, but since most of the clock time is spent waiting for the HTTP server to serve up a page, it doesn't matter, and this keeps it simple. Thanks for your help! PS: After having spent the last 6 years of my life up to my navel in C++, it's incredibly liberating to be creating classes on the fly in user code :-) -- http://mail.python.org/mailman/listinfo/python-list
Re: [OT] Free software versus software idea patents
Chris Angelico wrote: All software can be expressed as lambda calculus. The point being, all software is mathematics... With enough software, you can simulate anything. That means that the entire universe can be expressed as lambda calculus. Does that mean that nothing can ever be patented, because it's all just mathematics? Great question... the simple answer is, no. But the extended answer is a little complicated and not well understood by most folks, so its worth talking about, at least a lot. You may skip to the last paragraph for the main point... or stay tuned for the explanation. Mathematical processes and algorithms are not patentable (by rule) because they are 'natural' and 'obvious'. In other words, a natural set of laws (mathematics, just one example) are universally used naturally and obviously by all humans in the course of thinking, creating, expressing, etc., and therefore these ideas are not patentable because they are the natural and obvious 'stuff' from which and through which the human mind processes the natural world. You cannot patent the Pythagorean theorem. You cannot patent addition, nor subtraction, nor the logical concepts for boolean algebra nor can you patent lambda calculus. These are just examples. You cannot patent the mathematical concept of nand gate; however, Motorola may patent the mechanical electrical implementation of the nand gate (CMOS 4011 quad nand). Also, Texas Instruments may patent their mechanical electrical implementation of the nand gate concept (TTL sn7400n quad chip). The chips are patentable, but the mathematical concept 'behind' the chips is not patentable. Software is another sort of animal entirely. Because software is not just based on mathematics--- IT IS mathematics. In other words, software is not more and not less than an extension of the human mind 'itself' in mathematical and logical expression across symbol extended through a machine framework. 
All humans naturally and obviously process the world around them (regardless of race, creed, ethnic origin, color, geography, %etc) through input, control, arithmetic, logic, and output. Software running in a machine is nothing more nor less than software running in my own mind, or in your own mind. The machine (as a logical and mathematical symbol processor) sorts out world realities (modalities) via input, control, arithmetic, logic, and output. All software is the 'stuff' of natural and obvious human thought, creativity, expression, communication, etc. Granted, humans also have emotions and will, but for simplicity sake we will oversimplify. What has happened in the 20th century is that (to take natural language as an example) one person holds the patent on the verb. Another holds the patent on the noun. These two people cross-license their patents on the verbs and nouns, forcing the people who want to communicate with nouns and verbs to pay royalties, or to purchase noun and verb licenses, or to be forced to communicate without nouns and verbs. --- wouldn't that be interesting. To put it in simple comp sci terms (input, control, arithmetic, logic, and output) it works something like this: One corporation holds the patent on input(), and another corporation holds the patent on output(). Now then, all natural people need to use both input() and output() for communicating ( listening and speaking ) within the natural framework for their human endeavors, and yet they are forced by law to pay for a license to be able to use input() and output()... ... their control, arithmetic and logic are not much good to them otherwise. The holders of the input() and output() patents of course cross license so that they are held harmless and so that they alone (the oligarchy) control what is input() and output() and by whom. Needless to say, this is not a good thing. Software idea patents lay ownership to *so called* intellectual property. 
In other words, those who wield software idea patents believe that they hold ownership over the natural and obvious 'stuff' of human thinking, creating, organizing, sorting, communicating, etc. This is nothing other than an intellectual slave trade. The slave trade of human bodily bondage (which still goes on unpunished today, but waning) holds that persons may be owned. The slave trade of the late 20th and early 21st centuries holds that person's thoughts and thinking processes may be owned! This intellectual slave trade must end, now! Think about this... if I tell you something of substance little or great and it gives you pause for thought, then I have placed my thoughts (my process, my programming, my intellectual property) into your head... you are now thinking things that I myself have thought about prior to you. Now then, I lay hold to the concept that I now own those thoughts (it is my intellectual property you are now thinking of) and you therefore are bound (by
Re: [OT] Free software versus software idea patents
On Mon, 2011-04-11 at 10:18 +1000, Chris Angelico wrote: On Mon, Apr 11, 2011 at 10:04 AM, harrismh777 harrismh...@charter.net wrote: Not so fast there, Steve. If they [Microsoft] are paying anything (unsubstantiated, unknowable) to Python, Apache, or (Linux, whatever you mean by that term...) there are only two motives: http://www.python.org/psf/ - Microsoft is listed. I would assume one does not become a sponsor member without paying money (or a whole lot of something that translates to money, like staff hours). http://www.apache.org/foundation/thanks.html - again, Microsoft is listed (in the top category, Platinum Sponsors). Feel free to continue discussing the merits of these donations, but it is definitely substantiable and knowable. Microsoft has money, and they're prepared to spend it on what they believe in. (My view is that they believe in positive PR more than they believe in Python or Apache or whatever.) Chris Angelico I'm sure they're using both products for in-house stuff. Who isn't? ^_^ -- http://mail.python.org/mailman/listinfo/python-list
Re: Python program termination and exception catching
On Sun, Apr 10, 2011 at 4:49 PM, Jerry Hill malaclyp...@gmail.com wrote: On Sun, Apr 10, 2011 at 3:25 PM, Jason Swails jason.swa...@gmail.com wrote: Hello everyone, This may sound like a bit of a strange desire, but I want to change the way in which a python program quits if an exception is not caught. The program has many different classes of exceptions (for clarity purposes), and they're raised whenever something goes wrong. Most I want to be fatal, but others I'd want to catch and deal with. Is there any way to control Python's default exit strategy when it hits an uncaught exception (for instance, call another function that exits differently)? When an exception is raised and uncaught, the interpreter calls sys.excepthook. You can replace sys.excepthook with your own function. See http://docs.python.org/library/sys.html#sys.excepthook This is exactly what I was looking for. Thank you! I can just redefine sys.excepthook to call MPI's Abort function and print the Tracebacks; exactly what I wanted. MPI threading doesn't work in the same way as, for instance, the threading modules in Python's stdlib. It doesn't spawn additional threads from some 'main' thread. Instead, all of the threads are launched simultaneously at the beginning and run the same program, dividing the workload based on their rank, so I think my application is immune to this bug. Thanks again! Jason If your program is threaded, you may need to look at this bug: http://bugs.python.org/issue1230540. It describes a problem with replacing sys.excepthook when using the threading module, along with some workarounds. There's a simple example of replacing excepthook here: http://code.activestate.com/recipes/65287/ -- http://mail.python.org/mailman/listinfo/python-list
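[Editor's sketch, not part of the original post.] What that redefinition might look like, assuming mpi4py is installed (the hook name is illustrative):

```python
import sys
import traceback

def mpi_abort_hook(exc_type, exc_value, exc_tb):
    # Print the traceback, then tear down every MPI process so the
    # others are not left hanging on a later communication call.
    traceback.print_exception(exc_type, exc_value, exc_tb)
    from mpi4py import MPI  # assumption: mpi4py is available
    MPI.COMM_WORLD.Abort(1)

sys.excepthook = mpi_abort_hook
```

The mpi4py import is deferred into the handler so the hook can be installed even before MPI initialization is an issue.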
[issue11818] tempfile.TemporaryFile example in docs doesnt work
New submission from eduardo schettin...@gmail.com: From the example: http://docs.python.org/py3k/library/tempfile.html#examples The error message is weird... but I guess the problem is the default mode 'w+b'.

Python 3.3a0 (default:78a66c98288d, Apr 9 2011, 16:13:31) [GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tempfile
>>> fp = tempfile.TemporaryFile()
>>> fp.write('hello')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' does not support the buffer interface
>>> fp2 = tempfile.TemporaryFile('w+')
>>> fp2.write('hello')
5

-- assignee: docs@python components: Documentation, Library (Lib) messages: 133447 nosy: docs@python, schettino72 priority: normal severity: normal status: open title: tempfile.TemporaryFile example in docs doesnt work versions: Python 3.2 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11818 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
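[Editor's sketch, not part of the original report.] The two working alternatives the report points at, in runnable form — the default mode is 'w+b', so either write bytes, or open the file in text mode:

```python
import tempfile

# Default mode 'w+b': the file expects bytes.
with tempfile.TemporaryFile() as fp:
    fp.write(b'hello')
    fp.seek(0)
    assert fp.read() == b'hello'

# Text mode 'w+': plain str works.
with tempfile.TemporaryFile('w+') as fp:
    fp.write('hello')
    fp.seek(0)
    assert fp.read() == 'hello'
```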
[issue11810] _socket fails to build on OpenIndiana
Carl Brewer c...@bl.echidna.id.au added the comment: I know this is closed etc... but Plone (the CMS I use) is tied to various versions of Python, in particular 2.6 at this time. Having it not build on Open[Solaris/Indiana] means I can't install current versions of Plone/Zope on this platform. Any chance it could be fixed? -- nosy: +Bleve ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11810 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11819] 'unittest -m' should not pretend it works on Python 2.5/2.6
New submission from anatoly techtonik techto...@gmail.com: The following command is broken on Python 2.5/2.6 python -m unittest test_file It outputs -- Ran 0 tests in 0.000s OK But in Python 2.7 the same command works -- Ran 1 tests in 0.000s OK It is even more confusing with test class method on command line: python26 -m unittest test_file.SomeTest Traceback (most recent call last): ... File C:\~env\Python26\lib\unittest.py, line 598, in loadTestsFromName test = obj() File C:\~env\Python26\lib\unittest.py, line 216, in __init__ (self.__class__, methodName) ValueError: no such test method in class 'test_file.SomeTest': runTest --- I know that our ... policy denies backporting such fixes to Python 2.5/2.6, but such things that make an illusion that they work while in fact they never did - see #6514, make Python really suxx. I can feel user frustration while trying to maintain 2.6 compatibility and wasting time trying to run test suite. I wouldn't mind if `-m unittest` won't work in non-supported versions, but it should at least point to bug report. (if I'll ever switch to Ruby - this one will definitely be in the list reasons) -- components: Tests messages: 133449 nosy: techtonik priority: normal severity: normal status: open title: 'unittest -m' should not pretend it works on Python 2.5/2.6 type: behavior versions: Python 2.5, Python 2.6 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11819 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11819] 'unittest -m' should not pretend it works on Python 2.5/2.6
Changes by Ezio Melotti ezio.melo...@gmail.com: -- nosy: +ezio.melotti, michael.foord ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11819 ___
[issue11818] tempfile.TemporaryFile example in docs doesnt work
Roundup Robot devnull@devnull added the comment: New changeset 87d89f767b23 by Ross Lagerwall in branch '3.2': Issue #11818: Fix tempfile examples for Python 3. http://hg.python.org/cpython/rev/87d89f767b23 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11818 ___
[issue11818] tempfile.TemporaryFile example in docs doesnt work
Ross Lagerwall rosslagerw...@gmail.com added the comment: Fixed the examples for Python 3. It writes and reads bytes now. Also fixed the old Python 2 print statement. -- assignee: docs@python -> rosslagerwall nosy: +rosslagerwall resolution: -> fixed status: open -> closed versions: +Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11818 ___
[issue11820] idle3 shell os.system swallows shell command output
New submission from kent fuzzba...@comcast.net: Attempting to run an os.system command under the IDLE 3 shell swallows the output. IDLE 3 is running on a 32-bit KDE Mandriva Linux.

import os
os.system('ls')
0
os.system('pwd')
0

As you can see, it returns 0, indicating successful completion, but there is no output. However, os.getcwd works perfectly:

os.getcwd()
'/home/kent/Documents'

Running the same code from Python in an X terminal works fine. Apparently the IDLE shell does not echo the standard output or error output the way the Python interpreter does. -- components: IDLE messages: 133452 nosy: Thekent priority: normal severity: normal status: open title: idle3 shell os.system swallows shell command output type: behavior versions: Python 3.1 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11820 ___
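The usual workaround is to capture the child's output explicitly rather than letting it go to the process's real stdout (which IDLE's shell window does not display); a sketch using the subprocess module:

```python
import subprocess
import sys

# os.system('pwd') only returns the exit status; the command's output is
# written to the interpreter process's real stdout, which the IDLE shell
# does not show. check_output() captures it as a Python string instead.
out = subprocess.check_output([sys.executable, "-c", "print('hello')"],
                              universal_newlines=True)
print(out.strip())  # hello
```
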
[issue11820] idle3 shell os.system swallows shell command output
kent fuzzba...@comcast.net added the comment: running it as a file from idle gives the same result. import os print (os.system('pwd')) 0 -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11820 ___
[issue11810] _socket fails to build on OpenIndiana
Antoine Pitrou pit...@free.fr added the comment:

> I know this is closed etc... but Plone (the CMS I use) is tied to various versions of Python, in particular 2.6 at this time. Having it not build on Open[Solaris/Indiana] means I can't install current versions of Plone/Zope on this platform. Any chance it could be fixed?

I would be surprised if Plone/Zope didn't work on 2.7 by now. Perhaps you want to ask on their mailing-lists. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11810 ___
[issue11810] _socket fails to build on OpenIndiana
Carl Brewer c...@bl.echidna.id.au added the comment: Plone ships with a universal installer which expects particular versions of python (and PIL etc etc) which makes it easy to build on, for example, many Linux distros, but it's just not working on Open[Solaris|Indiana] and also NetBSD (pkgsrc's python2.6 is broken too, but we're working on that). The only time the installer gets bumped is when new versions of Plone get released, which means that only the bleeding edge might work. This is a problem for many integrators who are tied to older versions of Plone|Zope that are unlikely to get migrated to more recent releases in any sort of a reasonable timeframe. Is it really not possible to fix up python2.6 to solve this issue? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11810 ___
[issue11810] _socket fails to build on OpenIndiana
Changes by Antoine Pitrou pit...@free.fr: -- nosy: +barry ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11810 ___
[issue11819] 'unittest -m' should not pretend it works on Python 2.5/2.6
Amaury Forgeot d'Arc amaur...@gmail.com added the comment: Isn't this an exact duplicate of issue6514? Or do you suggest something else? -- nosy: +amaury.forgeotdarc ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11819 ___
[issue11821] smtplib should provide a means to validate a remote server ssl certificate(s)
New submission from david db.pub.m...@gmail.com: (This is similar to http://bugs.python.org/issue10274) The smtplib module should provide a means to validate a remote server's SSL certificate(s). It would be nice if smtplib.SMTP_SSL and smtplib.starttls took arguments to validate that the remote SMTP server's SSL certificate has been signed by a trusted certificate authority (and that the common name matches what it should, etc.). -- messages: 133457 nosy: db priority: normal severity: normal status: open title: smtplib should provide a means to validate a remote server ssl certificate(s) ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11821 ___
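For reference, later Python versions grew `context` arguments on smtplib.SMTP_SSL and starttls(); a sketch of the kind of validating configuration being requested here (smtp.example.com is a placeholder host, not from the report):

```python
import smtplib
import ssl

# A context that requires a certificate signed by a trusted CA and checks
# that the server hostname matches the certificate.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# With the later API this would be used as (not executed here):
#   smtplib.SMTP_SSL("smtp.example.com", 465, context=context)
```
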
[issue8809] smtplib should support SSL contexts
Changes by Antoine Pitrou pit...@free.fr: -- title: smptlib should support SSL contexts -> smtplib should support SSL contexts ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8809 ___
[issue11821] smtplib should provide a means to validate a remote server ssl certificate(s)
Antoine Pitrou pit...@free.fr added the comment: Duplicate of issue11821. -- nosy: +pitrou resolution: -> duplicate status: open -> closed superseder: -> smtplib should provide a means to validate a remote server ssl certificate(s) ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11821 ___
[issue11821] smtplib should provide a means to validate a remote server ssl certificate(s)
Antoine Pitrou pit...@free.fr added the comment: Oops, I meant issue8809. -- superseder: smtplib should provide a means to validate a remote server ssl certificate(s) -> smtplib should support SSL contexts ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11821 ___
[issue6514] python -m unittest testmodule does not run any tests
Changes by Ezio Melotti ezio.melo...@gmail.com: -- nosy: +ezio.melotti ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6514 ___
[issue11819] 'unittest -m' should not pretend it works on Python 2.5/2.6
Georg Brandl ge...@python.org added the comment: Yes, this is a duplicate. -- nosy: +georg.brandl resolution: -> duplicate status: open -> closed superseder: -> python -m unittest testmodule does not run any tests ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11819 ___
[issue2650] re.escape should not escape underscore
Roundup Robot devnull@devnull added the comment: New changeset dda33191f7f5 by Ezio Melotti in branch 'default': #2650: re.escape() no longer escapes the _. http://hg.python.org/cpython/rev/dda33191f7f5 -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2650 ___
[issue2650] re.escape should not escape underscore
Changes by Ezio Melotti ezio.melo...@gmail.com: -- resolution: -> fixed stage: needs patch -> committed/rejected status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2650 ___
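The effect of the fix is easy to check; with this change applied (Python 3.3 and later), the underscore passes through re.escape() unchanged and the escaped string still matches as a pattern:

```python
import re

# '_' is an ordinary character in a regular expression, so escaping it was
# unnecessary noise; after this change it is left alone.
print(re.escape("my_table"))  # my_table
print(bool(re.search(re.escape("my_table"), "SELECT * FROM my_table")))  # True
```
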
[issue8809] smtplib should support SSL contexts
Changes by david db.pub.m...@gmail.com: -- nosy: +db ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8809 ___
[issue8428] buildbot: test_multiprocessing timeout (test_notify_all? test_pool_worker_lifetime?)
Charles-Francois Natali neolo...@free.fr added the comment: I think those lockups are due to a race in the Pool shutdown code. In Lib/multiprocessing/pool.py:

    def close(self):
        debug('closing pool')
        if self._state == RUN:
            self._state = CLOSE
            self._worker_handler._state = CLOSE
            self._taskqueue.put(None)

We set the current state to CLOSE and send None to the taskqueue, so that the task_handler detects that we want to shut down the queue and sends None (a sentinel) to the inqueue for each worker process. When a worker process receives this sentinel, it exits, and when Pool's join method is called, each process is joined successfully. Now, there's a problem, because of the worker_handler thread. This thread constantly starts new workers if existing ones exited after having completed their work:

    def _handle_workers(pool):
        while pool._worker_handler._state == RUN and pool._state == RUN:
            pool._maintain_pool()
            time.sleep(0.1)
        debug('worker handler exiting')

where

    def _maintain_pool(self):
        """Clean up any exited workers and start replacements for them."""
        if self._join_exited_workers():
            self._repopulate_pool()

Imagine the following happens: worker_handler checks that the pool is still running (state == RUN), but before calling _maintain_pool it is preempted (release of the GIL), and Pool's close() method is called: the state is set to CLOSE, None is put on the taskqueue, and the worker processes exit. Then Pool's join is called:

    def join(self):
        debug('joining pool')
        assert self._state in (CLOSE, TERMINATE)
        self._worker_handler.join()
        self._task_handler.join()
        self._result_handler.join()
        for p in self._pool:
            p.join()

This blocks until worker_handler exits. That thread sooner or later resumes and calls _maintain_pool. _maintain_pool calls _repopulate_pool, which creates new worker processes. Then worker_handler checks the current state, sees CLOSE, and exits. Pool's join then blocks here:

    for p in self._pool:
        p.join()

since the newly created processes never receive the sentinels (already consumed by the previous worker processes)... This race can be reproduced almost every time by just adding a delay:

    def _handle_workers(pool):
        while pool._worker_handler._state == RUN and pool._state == RUN:
+           time.sleep(1)
            pool._maintain_pool()
            time.sleep(0.1)
        debug('worker handler exiting')

Then something as simple as this will block:

    p = multiprocessing.Pool(3)
    p.close()
    p.join()

I still have to think of a clean way to solve this. -- nosy: +neologix ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8428 ___
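For contrast, the intended shutdown sequence is close() followed by join(), with no workers respawned in between; a minimal sketch using the thread-backed pool (same Pool API, easier to run inline than process workers):

```python
from multiprocessing.pool import ThreadPool

def square(x):
    return x * x

pool = ThreadPool(3)
results = pool.map(square, range(5))
pool.close()   # no further tasks: workers exit after draining the queue
pool.join()    # must not hang: no new workers may be spawned after close()
print(results)  # [0, 1, 4, 9, 16]
```
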
[issue8809] smtplib should support SSL contexts
Thomas Scrace t...@scrace.org added the comment: Is anybody working on this issue? If not, I think it looks like it might be a nice one for me to tackle. I'll go ahead unless there are any objections. -- nosy: +thomas.scrace ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8809 ___
[issue11816] Add functions to return disassembly as string
Nick Coghlan ncogh...@gmail.com added the comment: I really like the idea of adding some lower level infrastructure to dis to make it generator based, making the disassembly more amenable to programmatic manipulation. Consider if, for each line disassemble() currently prints, we had an underlying iterator that yielded a named tuple consisting of (index, opcode, oparg, linestart, details). I've created a proof-of-concept for that in my sandbox (http://hg.python.org/sandbox/ncoghlan/file/get_opinfo/Lib/dis.py) which adds a get_opinfo() function that does exactly that. With disassemble() rewritten to use it, test_dis and test_peepholer still pass as currently written. Near-term, test_peepholer could easily continue to do what it does now (i.e. use the higher level dis() function and redirect sys.stdout). Longer term, it could be rewritten to analyse the opcode stream instead of doing string comparisons. -- hgrepos: +17 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11816 ___
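This idea eventually landed in the standard library: Python 3.4 added dis.get_instructions(), which yields named tuples much like the get_opinfo() proof of concept described here. A small sketch:

```python
import dis

def square(x):
    return x * x

# Each yielded Instruction has fields such as opname, opcode, arg, argval
# and offset, so tests can inspect the opcode stream directly instead of
# string-matching the printed disassembly.
opnames = [instr.opname for instr in dis.get_instructions(square)]
print("RETURN_VALUE" in opnames)
```
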
[issue11816] Refactor the dis module to provide better building blocks for bytecode analysis
Nick Coghlan ncogh...@gmail.com added the comment: Changed issue title to cover ideas like get_opinfo(). -- title: Add functions to return disassembly as string -> Refactor the dis module to provide better building blocks for bytecode analysis ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11816 ___
[issue11816] Refactor the dis module to provide better building blocks for bytecode analysis
Nick Coghlan ncogh...@gmail.com added the comment: Oops, I forgot to edit my comment to match the OpInfo definition I used in the proof-of-concept: OpInfo = collections.namedtuple('OpInfo', 'opindex opcode opname oparg details starts_line is_jump_target') -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11816 ___
[issue11816] Refactor the dis module to provide better building blocks for bytecode analysis
Eugene Toder elto...@gmail.com added the comment: So in the near term, dis-based tests should continue to copy/paste sys.stdout redirection code? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11816 ___
[issue11817] berkeley db 5.1 support
R. David Murray rdmur...@bitdance.com added the comment: Python 2.7 is closed for new features, I'm afraid. And Berkeley DB is not included in the Python 3 stdlib. It has reverted to being maintained entirely as a third party package. -- nosy: +r.david.murray resolution: -> rejected stage: -> committed/rejected status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11817 ___
[issue11802] filecmp.cmp needs a documented way to clear cache
Nadeem Vawda nadeem.va...@gmail.com added the comment: Georg? Benjamin? Do you think this fix should be backported? -- nosy: +benjamin.peterson, georg.brandl ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11802 ___
[issue11816] Refactor the dis module to provide better building blocks for bytecode analysis
Nick Coghlan ncogh...@gmail.com added the comment: If we decide our long term goal is the use of the opcode stream for programmatic access, then yes. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11816 ___
[issue11807] Documentation of add_subparsers lacks information about parametres
Filip Gruszczyński grusz...@gmail.com added the comment: Here is a patch for this. I am not much of a technical writer, so please be patient with me. I tried to provide all the information about the parameters that can be inferred from the code and from experimenting. I have left out one parameter - action - because I don't see any use of it for a potential user, and a potential description seemed very complicated. I'll be happy to work further on the patch, if someone is willing to tutor me a little. -- keywords: +patch Added file: http://bugs.python.org/file21604/11807.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11807 ___
[issue11610] Improving property to accept abstract methods
Darren Dale dsdal...@gmail.com added the comment: So, are there objections to this patch, or can it be merged? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11610 ___
[issue11772] email header wrapping edge case failure
Changes by R. David Murray rdmur...@bitdance.com: -- resolution: -> duplicate stage: needs patch -> committed/rejected status: open -> closed superseder: -> email.header.Header doesn't fold headers correctly ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11772 ___
[issue8809] smtplib should support SSL contexts
Antoine Pitrou pit...@free.fr added the comment:

> Is anybody working on this issue? If not, I think it looks like it might be a nice one for me to tackle. I'll go ahead unless there are any objections.

Nobody is working on it AFAIK. Feel free to give it a try :) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8809 ___
[issue11650] Faulty RESTART/EINTR handling in Parser/myreadline.c
Steffen Daode Nurpmeso sdao...@googlemail.com added the comment:

On Sat, Apr 09, 2011 at 02:18:01PM +, STINNER Victor wrote:
> I noticed a strange behaviour:

Still fun, but this one could even make it except for termios flags, multibyte and the real problem, signal handling. Hm. -- Added file: http://bugs.python.org/file21605/11650.termios-1.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11650 ___

diff --git a/Parser/myreadline.c b/Parser/myreadline.c
--- a/Parser/myreadline.c
+++ b/Parser/myreadline.c
@@ -10,6 +10,10 @@
  */
 #include "Python.h"
+#ifdef Py_PYPORT_H
+# define __USE_TERMIOS
+# include <signal.h>
+#endif
 #ifdef MS_WINDOWS
 #define WIN32_LEAN_AND_MEAN
 #include <windows.h>
@@ -19,6 +23,18 @@
 extern char* vms__StdioReadline(FILE *sys_stdin, FILE *sys_stdout, char *prompt);
 #endif
+
+typedef struct Args {
+    char *input;
+    FILE *fp;
+    int tios_fd;
+    int tios_is_init;
+#ifdef __USE_TERMIOS
+    int tios_is_set;
+    int __align;
+    struct termios tios_old;
+    struct termios tios_new;
+#endif
+} Args;

 PyThreadState* _PyOS_ReadlineTState;
@@ -29,117 +45,230 @@
 int (*PyOS_InputHook)(void) = NULL;

-/* This function restarts a fgets() after an EINTR error occurred
-   except if PyOS_InterruptOccurred() returns true.
- */
+/* This function restarts a fgetc() after an EINTR error occurred
+ * except if PyOS_InterruptOccurred() returns true */
+static int my_fgets(Args *args);
+#ifdef __USE_TERMIOS
+static void termios_resume(Args *args);
+static void termios_suspend(Args *args);
+#endif
+
+#ifdef __USE_TERMIOS
+static void
+termios_resume(Args *args)
+{
+    if (!args->tios_is_init) {
+        args->tios_is_init = 1;
+
+        while (tcgetattr(args->tios_fd, &args->tios_old) != 0)
+            if (errno != EINTR) {
+                args->tios_fd = -1;
+                goto jleave;
+            }
+
+        memcpy(&args->tios_new, &args->tios_old, sizeof(args->tios_old));
+        args->tios_new.c_lflag &= ~(/*ECHOCTL |*/ ICANON);
+        args->tios_new.c_cc[VMIN] = 1;
+    }
+
+    if (args->tios_fd < 0)
+        goto jleave;
+
+    while (tcsetattr(args->tios_fd, TCSAFLUSH, &args->tios_new) != 0)
+        ;
+    args->tios_is_set = 1;
+
+jleave:
+    return;
+}
+
+static void
+termios_suspend(Args *args)
+{
+    if (args->tios_is_init && args->tios_is_set) {
+        while (tcsetattr(args->tios_fd, TCSANOW, &args->tios_old) != 0)
+            ;
+        args->tios_is_set = 0;
+    }
+    return;
+}
+#endif

 static int
-my_fgets(char *buf, int len, FILE *fp)
+my_fgets(Args *args)
 {
-    char *p;
+    int estat;
+    char *buf, *cursor;
+    size_t buf_len;
+
+    buf = (char*)PyMem_MALLOC(2*80);
+    estat = 1;
+    if (buf == NULL)
+        goto jreturn;
+
+    cursor = buf;
+    buf_len = 2*80 - 2;
+jrestart_input:
+    estat = 0;
+
+    if (PyOS_InputHook != NULL)
+        (void)(PyOS_InputHook)();
+#ifdef __USE_TERMIOS
+    termios_resume(args);
+#endif
+
+    /* Fetch bytes until error or newline */
+    errno = 0;
     while (1) {
-        if (PyOS_InputHook != NULL)
-            (void)(PyOS_InputHook)();
-        errno = 0;
-        p = fgets(buf, len, fp);
-        if (p != NULL)
-            return 0; /* No error */
+        int c = fgetc(args->fp);
+#ifdef __USE_TERMIOS
+        if (!isprint(c))
+            switch (c) {
+            case '\x04':
+                c = EOF;
+                /* FALLTHROUGH */
+            default:
+                break;
+            case '\x03':
+                estat = SIGINT;
+                goto j_sigit;
+            case '\x1A':
+                estat = SIGTSTP;
+                goto j_sigit;
+            case '\x1C':
+                estat = SIGQUIT;
+                /* FALLTHROUGH */
+            j_sigit:
+                termios_suspend(args);
+                kill(getpid(), estat);
+                errno = EINTR;
+                goto jcheck_fail;
+            }
+#endif
+        if (c == EOF)
+            goto jcheck_fail;
+        *(cursor++) = (char)c;
+        if (c == '\n')
+            break;
+
+        if ((size_t)(cursor - buf) >= buf_len) {
+            buf_len += 2+32;
+            cursor = buf = (char*)PyMem_REALLOC(buf, buf_len);
+            if (buf == NULL) {
+                estat = 1;
+                goto jreturn;
+            }
+            buf_len -= 2+32;
+            cursor += buf_len;
+            buf_len += 32;
+        }
+    }
+
+    *cursor = '\0';
+    args->input = buf;
+jreturn:
+#ifdef __USE_TERMIOS
+    termios_suspend(args);
+#endif
+    return estat;
+
+jcheck_fail:
 #ifdef MS_WINDOWS
-    /* In the case of a Ctrl+C or some other external event
-       interrupting the operation:
-       Win2k/NT: ERROR_OPERATION_ABORTED is the most recent Win32
-       error code (and feof() returns TRUE).
-       Win9x: Ctrl+C seems to have no effect on
[issue11700] mailbox.py proxy updates
Steffen Daode Nurpmeso sdao...@googlemail.com added the comment: I reviewed this. And moved a _PartialFile-only _read() case to _PartialFile where it belongs (*this* _ProxyFile will never be extended to stand alone, so I shouldn't have moved that the other direction at all). -- Added file: http://bugs.python.org/file21606/11700.yeah-review.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11700 ___

diff --git a/Lib/mailbox.py b/Lib/mailbox.py
--- a/Lib/mailbox.py
+++ b/Lib/mailbox.py
@@ -1864,97 +1864,142 @@
     """Message with MMDF-specific properties."""

-class _ProxyFile:
-    """A read-only wrapper of a file."""
+class _ProxyFile(io.BufferedIOBase):
+    """A io.BufferedIOBase inheriting read-only wrapper for a seekable file.
+    It supports __iter__() and the context-manager protocol."""
+
+    def __init__(self, file, pos=None):
+        io.BufferedIOBase.__init__(self)
+        self._file = file
+        self._pos = file.tell() if pos is None else pos
+        self._close = True
+        self._is_open = True

-    def __init__(self, f, pos=None):
-        """Initialize a _ProxyFile."""
-        self._file = f
-        if pos is None:
-            self._pos = f.tell()
+    def _set_noclose(self):
+        """Subclass hook - use to avoid closing internal file object."""
+        self._close = False
+
+    def _closed_check(self):
+        """Raise ValueError if not open."""
+        if not self._is_open:
+            raise ValueError('I/O operation on closed file')
+
+    def close(self):
+        if self._close:
+            self._close = False
+            self._file.close()
+            del self._file
+        self._is_open = False
+
+    @property
+    def closed(self):
+        return not self._is_open
+
+    def flush(self):
+        # Not possible because it gets falsely called (issue 11700)
+        #raise io.UnsupportedOperation('flush')
+        pass
+
+    def _read(self, size, read_method, readinto_arg=None):
+        if size is None or size < 0:
+            size = -1
+        self._file.seek(self._pos)
+        if not readinto_arg:
+            result = read_method(size)
         else:
-            self._pos = pos
+            result = read_method(readinto_arg)
+            if result < len(readinto_arg):
+                del readinto_arg[result:]
+        self._pos = self._file.tell()
+        return result

-    def read(self, size=None):
-        """Read bytes."""
+    def readable(self):
+        self._closed_check()
+        return True
+
+    def read(self, size=-1):
+        self._closed_check()
+        if size is None or size < 0:
+            return self.readall()
         return self._read(size, self._file.read)

-    def read1(self, size=None):
-        """Read bytes."""
+    def read1(self, size=-1):
+        self._closed_check()
+        if size is None or size < 0:
+            return b''
         return self._read(size, self._file.read1)

-    def readline(self, size=None):
-        """Read a line."""
+    def readinto(self, by_arr):
+        self._closed_check()
+        return self._read(len(by_arr), self._file.readinto, by_arr)
+
+    def readall(self):
+        self._closed_check()
+        self._file.seek(self._pos)
+        if hasattr(self._file, 'readall'):
+            result = self._file.readall()
+        else:
+            dl = []
+            while 1:
+                i = self._file.read(8192)
+                if len(i) == 0:
+                    break
+                dl.append(i)
+            result = b''.join(dl)
+        self._pos = self._file.tell()
+        return result
+
+    def readline(self, size=-1):
+        self._closed_check()
         return self._read(size, self._file.readline)

-    def readlines(self, sizehint=None):
-        """Read multiple lines."""
+    def readlines(self, sizehint=-1):
         result = []
         for line in self:
             result.append(line)
-            if sizehint is not None:
+            if sizehint >= 0:
                 sizehint -= len(line)
                 if sizehint <= 0:
                     break
         return result

+    def seekable(self):
+        self._closed_check()
+        return True
+
+    def seek(self, offset, whence=0):
+        self._closed_check()
+        if whence == 1:
+            self._file.seek(self._pos)
+        self._pos = self._file.seek(offset, whence)
+        return self._pos
+
+    def tell(self):
+        self._closed_check()
+        return self._pos
+
+    def writable(self):
+        self._closed_check()
+        return False
+
+    def writelines(self, lines):
+        raise io.UnsupportedOperation('writelines')
+
+    def write(self, b):
+        raise io.UnsupportedOperation('write')
+
     def __iter__(self):
-        """Iterate over lines."""
         while True:
             line = self.readline()
             if not line:
                 raise StopIteration
             yield line

-    def tell(self):
-
[issue11492] email.header.Header doesn't fold headers correctly
R. David Murray rdmur...@bitdance.com added the comment: This was quite the adventure. The more I worked on fixing the tests, the more if/else cases the existing splitting algorithm grew. When I reached the point where fixing one test broke two others, I thought maybe it was time to try a different approach. Based on the knowledge gathered by banging my head on the old algorithm, I developed a new one. This one is more RFC2822/RFC5322 compliant, I believe. It breaks only at FWS, but still gives preference to breaking after commas or semicolons by default. I had to adjust several tests that tested broken behavior: the folded lines were longer than maxlen even though there were suitable fold points. I'm very happy with this patch because there are 70 fewer lines of code but the module passes more tests. Even though the code changes are extensive, I plan to apply this to 3.2. It fixes bugs, and the new code is at least somewhat easier to understand than the old code (if only because there is less of it!) I don't plan to apply it to 3.1 because one older test fails if the patch is applied and I don't understand why (it appears to have nothing to do with line wrapping, and the same test works fine in 3.2). -- stage: needs patch -> patch review versions: -Python 3.1 Added file: http://bugs.python.org/file21607/better_header_spliter.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11492 ___
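The folding behavior under discussion can be observed with email.header.Header directly; a small sketch (the subject text is arbitrary):

```python
from email.header import Header

# A header longer than maxlinelen should be folded at FWS (whitespace),
# with continuation lines beginning with leading whitespace.
subject = "a subject line with plenty of words to force folding " * 2
folded = Header(subject, maxlinelen=40).encode()
print("\n " in folded)  # folded onto continuation lines
```
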
[issue11822] Improve disassembly to show embedded code objects
New submission from Raymond Hettinger raymond.hettin...@gmail.com: Now that list comprehensions run their internals in code objects (the same way that genexps do), it is getting harder to use dis() to see what code is generated. For example, the pow() call isn't shown in the following disassembly:

dis('[x**2 for x in range(3)]')
  1           0 LOAD_CONST               0 (<code object <listcomp> at 0x1005d1e88, file "<dis>", line 1>)
              3 MAKE_FUNCTION            0
              6 LOAD_NAME                0 (range)
              9 LOAD_CONST               1 (3)
             12 CALL_FUNCTION            1
             15 GET_ITER
             16 CALL_FUNCTION            1
             19 RETURN_VALUE

I propose that dis() build up a queue of undisplayed code objects and then disassemble each of those after the main disassembly is done (effectively making it recursive and displaying code objects in the order that they are first seen in the disassembly). For example, the output shown above would be followed by a disassembly of its internal code object:

<code object <listcomp> at 0x1005d1e88, file "<dis>", line 1>:
  1           0 BUILD_LIST               0
              3 LOAD_FAST                0 (.0)
              6 FOR_ITER                16 (to 25)
              9 STORE_FAST               1 (x)
             12 LOAD_FAST                1 (x)
             15 LOAD_CONST               0 (2)
             18 BINARY_POWER
             19 LIST_APPEND              2
             22 JUMP_ABSOLUTE            6
             25 RETURN_VALUE

-- components: Library (Lib) messages: 133478 nosy: rhettinger priority: normal severity: normal status: open title: Improve disassembly to show embedded code objects type: feature request versions: Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11822 ___
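Something close to this was later added: since Python 3.7, dis.dis() recurses into nested code objects by default. A sketch (using a lambda rather than a comprehension, since newer interpreters inline comprehensions):

```python
import dis
import io

# Disassemble a snippet whose constants include a nested code object.
buf = io.StringIO()
dis.dis("f = lambda x: x ** 2", file=buf)
out = buf.getvalue()

# The nested <lambda> code object is disassembled after the main block,
# introduced by a "Disassembly of ..." banner.
print("Disassembly of" in out)
```
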
[issue11816] Refactor the dis module to provide better building blocks for bytecode analysis
Alex Gaynor alex.gay...@gmail.com added the comment: FWIW in PyPy we have https://bitbucket.org/pypy/pypy/src/default/lib_pypy/disassembler.py which we use for some of our tools. -- nosy: +alex ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11816 ___
[issue11492] email.header.Header doesn't fold headers correctly
R. David Murray rdmur...@bitdance.com added the comment: Note that this fix solves issue 11772, so I've closed that one as a duplicate. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11492 ___
[issue1602] windows console doesn't print or input Unicode
Changes by pyloz merlinschindlb...@googlemail.com: -- nosy: +smerlin ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1602 ___
[issue11823] disassembly needs to show argument counts on calls with keyword args
New submission from Raymond Hettinger raymond.hettin...@gmail.com: The argument to CALL_FUNCTION is overloaded to encode both the number of positional arguments and keyword arguments (shifted by 8 bits):

dis("foo(10, opt=True)")
  1           0 LOAD_NAME                0 (foo)
              3 LOAD_CONST               0 (10)
              6 LOAD_CONST               1 ('opt')
              9 LOAD_CONST               2 (True)
             12 CALL_FUNCTION          257
             15 RETURN_VALUE

It is not obvious that the 257 argument causes three stack arguments to be popped. The disassembly should add a parenthetical to explain the composition:

dis("foo(10, opt=True)")
  1           0 LOAD_NAME                0 (foo)
              3 LOAD_CONST               0 (10)
              6 LOAD_CONST               1 ('opt')
              9 LOAD_CONST               2 (True)
             12 CALL_FUNCTION          257 (1 positional, 1 keyword pair)
             15 RETURN_VALUE

-- components: Library (Lib) messages: 133481 nosy: rhettinger priority: normal severity: normal status: open title: disassembly needs to show argument counts on calls with keyword args type: feature request versions: Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11823 ___
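The packing is simple to decode by hand; a sketch of the split the proposed annotation would display (257 is the oparg from the example above):

```python
arg = 257  # oparg of CALL_FUNCTION in the example above

# Low byte: number of positional arguments; next byte: number of keyword
# pairs. Each keyword pair occupies two stack slots (name and value).
n_positional = arg & 0xFF
n_keyword_pairs = (arg >> 8) & 0xFF
n_stack_slots = n_positional + 2 * n_keyword_pairs

print(n_positional, n_keyword_pairs, n_stack_slots)  # 1 1 3
```
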
[issue11823] disassembly needs to show argument counts on calls with keyword args
Changes by Antoine Pitrou pit...@free.fr: -- keywords: +easy stage: - needs patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11823 ___
[issue9325] Add an option to pdb/trace/profile to run library module as a script
Greg Słodkowicz jerg...@gmail.com added the comment:

Following Nick's advice, I extended runpy.run_module to accept an extra parameter to be used as a replacement __main__ namespace. Having this, I can make this temporary __main__ accessible in main() in modules like trace/profile/pdb even if module execution fails with an exception. The problem is that it's visible only in the calling function, not in the global namespace. One way to make it accessible for post-mortem debugging would be to create the replacement __main__ module in the global namespace and then pass it as a parameter to main(), but this seems clumsy. So maybe the way to go is to have runpy store the last used __main__, sys.exc_info() style. In that case, would this be the correct way to store it in runpy?

try:
    import threading
except ImportError:
    temp_main = None
else:
    local_storage = threading.local()
    local_storage.temp_main = None
    temp_main = local_storage.temp_main

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue9325 ___
[issue11823] disassembly needs to show argument counts on calls with keyword args
Changes by Daniel Urban urban.dani...@gmail.com: -- nosy: +durban ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11823 ___
[issue8428] buildbot: test_multiprocessing timeout (test_notify_all? test_pool_worker_lifetime?)
Charles-Francois Natali neolo...@free.fr added the comment:

Attached is a patch fixing this race, and a similar one in Pool's terminate.

-- keywords: +patch Added file: http://bugs.python.org/file21608/pool_shutdown_race.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8428 ___

diff -r bbfc65d05588 Lib/multiprocessing/pool.py
--- a/Lib/multiprocessing/pool.py  Thu Apr 07 10:48:29 2011 -0400
+++ b/Lib/multiprocessing/pool.py  Sun Apr 10 23:52:22 2011 +0200
@@ -322,6 +322,8 @@
         while pool._worker_handler._state == RUN and pool._state == RUN:
             pool._maintain_pool()
             time.sleep(0.1)
+        # send sentinel to stop workers
+        pool._taskqueue.put(None)
         debug('worker handler exiting')

     @staticmethod
@@ -440,7 +442,6 @@
         if self._state == RUN:
             self._state = CLOSE
             self._worker_handler._state = CLOSE
-            self._taskqueue.put(None)

     def terminate(self):
         debug('terminating pool')
@@ -474,7 +475,6 @@
         worker_handler._state = TERMINATE
         task_handler._state = TERMINATE
-        taskqueue.put(None)  # sentinel

         debug('helping task handler/workers to finish')
         cls._help_stuff_finish(inqueue, task_handler, len(pool))
@@ -484,6 +484,11 @@
         result_handler._state = TERMINATE
         outqueue.put(None)  # sentinel

+        # we must wait for the worker handler to exit before terminating
+        # workers because we don't want workers to be restarted behind our back
+        debug('joining worker handler')
+        worker_handler.join()
+
         # Terminate workers which haven't already finished.
         if pool and hasattr(pool[0], 'terminate'):
             debug('terminating workers')
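The patch's key move is making the thread that owns the task queue send the shutdown sentinel itself, so workers can never be restarted after shutdown has begun. A minimal, generic sketch of that sentinel pattern, using plain threading and a queue rather than the actual Pool internals:

```python
import queue
import threading

task_queue = queue.Queue()
results = []

def worker():
    # Consume tasks until the owner of the queue sends the None sentinel.
    while True:
        task = task_queue.get()
        if task is None:  # sentinel: shut down cleanly
            break
        results.append(task * 2)

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    task_queue.put(i)
task_queue.put(None)  # sentinel, sent exactly once by the queue's owner
t.join()
print(results)  # [0, 2, 4]
```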
[issue8428] buildbot: test_multiprocessing timeout (test_notify_all? test_pool_worker_lifetime?)
Antoine Pitrou pit...@free.fr added the comment: Nice! See also issue11814. -- nosy: +pitrou ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8428 ___
[issue8428] buildbot: test_multiprocessing timeout (test_notify_all? test_pool_worker_lifetime?)
Roundup Robot devnull@devnull added the comment: New changeset d5e43afeede6 by Antoine Pitrou in branch '3.2': Issue #8428: Fix a race condition in multiprocessing.Pool when terminating http://hg.python.org/cpython/rev/d5e43afeede6 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8428 ___
[issue11814] possible typo in multiprocessing.Pool._terminate
Roundup Robot devnull@devnull added the comment: New changeset c046b7e1087b by Antoine Pitrou in branch '3.2': Issue #11814: Fix likely typo in multiprocessing.Pool._terminate(). http://hg.python.org/cpython/rev/c046b7e1087b New changeset 76a3fc180ce0 by Antoine Pitrou in branch 'default': Merge from 3.2 (issue #11814, issue #8428) http://hg.python.org/cpython/rev/76a3fc180ce0 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11814 ___
[issue8428] buildbot: test_multiprocessing timeout (test_notify_all? test_pool_worker_lifetime?)
Roundup Robot devnull@devnull added the comment: New changeset dfc61dc14f59 by Antoine Pitrou in branch '2.7': Issue #8428: Fix a race condition in multiprocessing.Pool when terminating http://hg.python.org/cpython/rev/dfc61dc14f59 -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8428 ___
[issue8428] buildbot: test_multiprocessing timeout (test_notify_all? test_pool_worker_lifetime?)
Antoine Pitrou pit...@free.fr added the comment: Should be fixed now, thank you Charles-François. As for the TestCondition failure, there's a separate issue11790 open. (Victor, please don't file many bugs in a single issue!) -- resolution: - fixed stage: - committed/rejected status: open - closed versions: +Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8428 ___
[issue11814] possible typo in multiprocessing.Pool._terminate
Antoine Pitrou pit...@free.fr added the comment: Fixed. The _terminate() issue has been fixed separately in issue8428. -- resolution: - fixed stage: - committed/rejected status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11814 ___
[issue11816] Refactor the dis module to provide better building blocks for bytecode analysis
Changes by Raymond Hettinger raymond.hettin...@gmail.com: -- assignee: - rhettinger ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11816 ___
[issue11747] unified_diff function product incorrect range information
Roundup Robot devnull@devnull added the comment: New changeset 36648097fcd4 by Raymond Hettinger in branch '3.2': Cleanup and modernize code prior to working on Issue 11747. http://hg.python.org/cpython/rev/36648097fcd4 New changeset 58a3bfcc70f7 by Raymond Hettinger in branch 'default': Cleanup and modernize code prior to working on Issue 11747. http://hg.python.org/cpython/rev/58a3bfcc70f7 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11747 ___
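For context on what the issue targets, a minimal illustration of the range information unified_diff emits — the `@@ -start,length +start,length @@` hunk headers that issue 11747 reported as incorrect:

```python
import difflib

# The @@ hunk header carries the (start, length) range for each file.
a = ['one\n', 'two\n', 'three\n']
b = ['one\n', 'two!\n', 'three\n']
diff = list(difflib.unified_diff(a, b))
hunk = next(line for line in diff if line.startswith('@@'))
print(hunk.strip())  # @@ -1,3 +1,3 @@
```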
[issue4877] xml.parsers.expat ParseFile() causes segmentation fault when passed a closed file object
Roundup Robot devnull@devnull added the comment: New changeset 28705a7987c5 by Ezio Melotti in branch '2.7': #4877: Fix a segfault in xml.parsers.expat while attempting to parse a closed file. http://hg.python.org/cpython/rev/28705a7987c5 -- nosy: +python-dev ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4877 ___
[issue4877] xml.parsers.expat ParseFile() causes segmentation fault when passed a closed file object
Ezio Melotti ezio.melo...@gmail.com added the comment: This is now fixed in 2.7, I also removed the unnecessary call to PyErr_Clear in ba699cf9bdbb (2.7), 6b4467e71872 (3.2), and 2d1d9759d3a4 (3.3). -- assignee: - ezio.melotti resolution: - fixed stage: commit review - committed/rejected status: open - closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue4877 ___
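A minimal reproduction of the fixed behavior, assuming a current Python 3 where the read from the closed file object surfaces as a normal ValueError rather than a crash:

```python
import io
import xml.parsers.expat

# After the fix, parsing from a closed file raises an ordinary exception
# instead of segfaulting the interpreter.
p = xml.parsers.expat.ParserCreate()
f = io.BytesIO(b'<root/>')
f.close()
try:
    p.ParseFile(f)
    outcome = 'parsed'
except ValueError:  # "I/O operation on closed file" from f.read()
    outcome = 'rejected'
print(outcome)
```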
[issue11820] idle3 shell os.system swallows shell command output
kent fuzzba...@comcast.net added the comment: When starting IDLE from a terminal, the output from the command is sent to the terminal. When starting IDLE from the desktop, the output disappears except for the exit status. Same behavior with 2.6.5. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11820 ___
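The behavior described follows from os.system letting the child inherit the process's real stdout, which a desktop-launched IDLE does not own. Capturing the output explicitly sidesteps how the GUI was started; a sketch using subprocess:

```python
import subprocess
import sys

# check_output captures the child's stdout instead of letting it inherit
# the parent's, so the result is usable regardless of how IDLE was launched.
out = subprocess.check_output([sys.executable, '-c', 'print("hello")'])
print(out.decode().strip())  # hello
```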