wxpython-OGL fails to render objects with Python-3
I have a substantial wxPython-based application that I'm trying to port from Python 2 to 3. Almost everything is working properly, except for a few small but important sections that use the OGL library. The code executes without any exceptions, but the objects created within the diagram/canvas/panel are invisible (sometimes they are visible for a small fraction of a second). This occurs on Windows 10 and on Linux (Debian) systems. The colored background and buttons render just fine, and the buttons do what they should, but no objects appear on the background (again, only with Py3; Py2 works properly). I have cut the code down to a pretty minimal set and paste it below in the hope that it will encourage one of you to see what the problem might be. Alternatively, if anyone knows of an example that works with Python 3, I'd be delighted to learn of it. Thanks for any insights!!

# - # togl.py  test 'OGL' shape system.
from __future__ import print_function
import sys
if 2 == sys.version_info.major:
    import wxversion
import wx
import random
import wx.lib.ogl as ogllib

NOMSIZE = 80
WINSIZE = 400
N_ITEMS = 7
shapeTypes = ('rect', 'circle', 'rndRect')

class ShapeGraphicWindow(ogllib.ShapeCanvas):
    def __init__(self, parent):
        ogllib.ShapeCanvas.__init__(self, parent)
        self.diagram = ogllib.Diagram()
        self.SetDiagram(self.diagram)
        self.diagram.SetCanvas(self)
        self.SetBackgroundColour(wx.BLUE)

    def addShape(self, shape_type, title):
        if 'rect' == shape_type:        # rectangle
            shape = ogllib.RectangleShape(50, 35)
            brush = wx.Brush(wx.TheColourDatabase.Find("RED"), wx.SOLID)
        elif 'circle' == shape_type:    # circle
            shape = ogllib.CircleShape(65)
            brush = wx.Brush(wx.TheColourDatabase.Find("YELLOW"), wx.SOLID)
        elif 'rndRect' == shape_type:   # rounded-rectangle
            shape = ogllib.RectangleShape(45, 30)
            shape.SetCornerRadius(-0.3)
            brush = wx.Brush(wx.TheColourDatabase.Find("GOLDENROD"), wx.SOLID)
        else:
            raise AssertionError("Unable to add shape: %s : %s" % (shape_type, title))
        shape.SetBrush(brush)
        x = int(random.uniform(NOMSIZE, WINSIZE - NOMSIZE))
        shape.SetX(x)
        y = int(random.uniform(NOMSIZE, WINSIZE - NOMSIZE))
        shape.SetY(y)
        shape.AddText(title)
        print("Draw", title, "at location ", (x, y), "on canvas of size", self.GetSize())
        shape.SetCanvas(self)
        self.AddShape(shape)
        self.Refresh()
        shape.Show(True)
        return

class TestPanel(wx.Panel):
    def __init__(self, frame):
        wx.Panel.__init__(self, parent=frame)
        self.objcnts = (0, N_ITEMS)
        sz = wx.BoxSizer(wx.VERTICAL)
        hsz = wx.BoxSizer(wx.HORIZONTAL)
        btnq = wx.Button(self, -1, "Quit")
        btnq.Bind(wx.EVT_BUTTON, self.Quit)
        hsz.Add(btnq, 0, wx.ALL, 3)
        btnq = wx.Button(self, -1, "AutoTest")
        btnq.Bind(wx.EVT_BUTTON, self.AutoTest)
        hsz.Add(btnq, 0, wx.ALL, 3)
        sz.Add(hsz, 0, wx.ALIGN_LEFT)
        self.shp_graph_win = ShapeGraphicWindow(self)
        sz.Add(self.shp_graph_win, 2, wx.EXPAND)
        self.SetSizer(sz)
        #self.Layout()
        #self.Fit()
        self.SetAutoLayout(True)

    def mkTitle(self, index):
        return ''.join([chr(index + ord('A')), ":", str(index)])

    def AutoTest(self, event=None):
        for j in range(*(self.objcnts)):
            self.shp_graph_win.addShape(shapeTypes[j % len(shapeTypes)], self.mkTitle(j))
        self.objcnts = (self.objcnts[1], self.objcnts[1] + N_ITEMS)

    def Quit(self, event):
        self.Destroy()
        sys.exit(0)

class TestFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, "test basic OGL functionality",
                          size=wx.Size(WINSIZE, WINSIZE))
        TestPanel(self)
        self.Show(True)

app = wx.App(False)
frame = TestFrame()
ogllib.OGLInitialize()
app.MainLoop()
sys.exit(0)

-- https://mail.python.org/mailman/listinfo/python-list
Re: python3.7 installation failing - so why?
On Sat, 23 Feb 2019 14:56:03 +1100, Chris Angelico wrote:

> On Sat, Feb 23, 2019 at 2:51 PM Frank Miles wrote:
>>
>> I have a Debian/Linux machine that I just upgraded to the newer
>> "testing" distribution.  I'd done that earlier to another machine and
>> all went well.  With the latest machine, python2 is OK but python3 can
>> barely run at all.  For example:
>>
>> $ python3
>> Python 3.7.2+ (default, Feb  2 2019, 14:31:48)
>> [GCC 8.2.0] on linux
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> help()
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in <module>
>>   File "/usr/lib/python3.7/_sitebuiltins.py", line 102, in __call__
>>     import pydoc
>>   File "/usr/lib/python3.7/pydoc.py", line 66, in <module>
>>     import inspect
>>   File "/usr/lib/python3.7/inspect.py", line 40, in <module>
>>     import linecache
>>   File "/usr/lib/python3.7/linecache.py", line 11, in <module>
>>     import tokenize
>>   File "/usr/lib/python3.7/tokenize.py", line 33, in <module>
>>     import re
>>   File "/usr/lib/python3.7/re.py", line 143, in <module>
>>     class RegexFlag(enum.IntFlag):
>> AttributeError: module 'enum' has no attribute 'IntFlag'
>>
>> Question: how can I determine what has gone wrong?
>
> Hmm. I'd start with:
>
> $ which python3
> $ dpkg -S `which python3`
>
> and from inside Python:
> >>> import sys; sys.path
> >>> import enum; enum.__file__
>
> My best guess at the moment is that your "enum" package is actually a
> compatibility shim for earlier Python versions, less functional than the
> one provided by Python 3.7.  You may need to *uninstall* a shim package.
> But I could well be wrong, and maybe there'd be a clue in your paths.
>
> ChrisA

Whoopee!  You nailed it!  The path included
/usr/local/lib/python3.7/dist-packages, which included an enum file as
you suggested.  The 'import enum; enum.__file__' (gonna have to look up
that syntax) provided the path to that directory.

Many thanks Chris for a most helpful suggestion!  -Frank
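Chris's two interactive checks can be combined into a short stdlib-only probe. `enum` is the module from this thread, but the same sketch works for any stdlib name you suspect is being shadowed by a stale backport:

```python
import enum
import sys

# Where is 'enum' actually imported from?  A stale backport shim typically
# lives under .../dist-packages or .../site-packages instead of the
# interpreter's own /usr/lib/pythonX.Y directory.
print(enum.__file__)

# The real 3.6+ stdlib module defines IntFlag; the old PyPI 'enum' shim
# (enum34) does not, which is exactly the AttributeError in the traceback.
print(hasattr(enum, "IntFlag"))

# sys.path shows which directories are searched first, i.e. who can shadow whom.
print(sys.path[:3])
```

If the first line points into dist-packages, removing that stray package (or the directory entry shadowing the stdlib) is the fix.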
python3.7 installation failing - so why?
I have a Debian/Linux machine that I just upgraded to the newer "testing"
distribution.  I'd done that earlier to another machine and all went well.
With the latest machine, python2 is OK but python3 can barely run at all.
For example:

$ python3
Python 3.7.2+ (default, Feb  2 2019, 14:31:48)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> help()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.7/_sitebuiltins.py", line 102, in __call__
    import pydoc
  File "/usr/lib/python3.7/pydoc.py", line 66, in <module>
    import inspect
  File "/usr/lib/python3.7/inspect.py", line 40, in <module>
    import linecache
  File "/usr/lib/python3.7/linecache.py", line 11, in <module>
    import tokenize
  File "/usr/lib/python3.7/tokenize.py", line 33, in <module>
    import re
  File "/usr/lib/python3.7/re.py", line 143, in <module>
    class RegexFlag(enum.IntFlag):
AttributeError: module 'enum' has no attribute 'IntFlag'
>>>

Question: how can I determine what has gone wrong?  Reinstalling most of
the packages hasn't done any good.  I did a similar upgrade to another
machine -- and it doesn't have this problem.  Running md5sum on the above
modules as well as /usr/bin/python3.7 generates the same numbers
(comparing the working and nonfunctional machines).

Any clues/hints/ideas would be appreciated.  There's a whole lot broken on
this machine with python3 inoperable :(   -F
Re: closing image automatically in for loop , python
On Wed, 12 Apr 2017 04:18:45 -0700, Masoud Afshari wrote:

> filename = "Ex_resample" + '_sdf_' + str(n) + '.dat'
> with open(filename, 'rb') as f:    # read binary file
>     data = np.fromfile(f, dtype='float64', count=nx*ny)  # float64 for double-precision float numbers
>     Ex = np.reshape(data, [ny, nx], order='F')
>     #print Ex.max()

Your use of the 'with open(...) as f:' should automatically close access
to filename once execution passes beyond that section.  In your posting,
the commented-out sections and garbled spacing would prevent anything
useful from happening.  Does that accurately reflect the code you're
trying to run?  -F
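A minimal demonstration of the point about 'with': the file handle is closed as soon as the block exits, with no explicit close needed (the temp file here is just a stand-in for the poster's .dat file):

```python
import os
import tempfile

# Create a small stand-in binary file.
fd, path = tempfile.mkstemp(suffix=".dat")
os.close(fd)

with open(path, "wb") as f:
    f.write(b"\x00" * 16)

# Outside the with-block the handle is already closed.
print(f.closed)

with open(path, "rb") as f:
    data = f.read()      # read it back; the file is closed again on exit

os.remove(path)
print(len(data))
```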
Re: Data exchange between python script and bash script
On Tue, 04 Apr 2017 08:01:42 -0700, venkatachalam.19 wrote:

> Hello All,
>
> I am writing python code for processing data obtained from a sensor.
> The data from the sensor is obtained by executing a python script.  The
> data obtained should then be given to another python module, where the
> received data is used for adjusting the location of an object.
>
> For achieving this, there is a central bash script, which runs both
> python modules in parallel.  Something like:
>
> python a.py &
> python b.py &

What is going on that two python scripts are needed?  Which one generates
the data needed by the bash script?

> I am trying to return the sensor data to the bash .sh file, so it can
> be provided to the other script.  This, based on the online tutorials,
> looks like:
>
> sensor_data=$(python execute_sensor_process.py) &

Presumably this is simply getting the exit status code from the python
interpreter, not the data, right?  What are you seeing?

> and the sensor_data is assigned by printing the required data in the
> corresponding python script.  For example, the data is printed in
> execute_sensor_process.py as follows:
>
> print >>sys.stderr, sens_data
>
> By printing the data onto sys.stderr and assigning a return variable in
> the bash, I am expecting the data to be assigned.

Assigned to what?  Some return variable in bash?  What??  Why not use
stdout?  Either pipe the data from python directly into a (possibly
modified) bash script, or into a file which gets read by the bash script.

> But this is not happening.  The sensor data is a dictionary and I would
> like to have this data for further analysis.  I am not getting the data
> returned from the python script into the bash variable.

Bash doesn't have dictionaries like python.  Why is bash needed?

> Can someone help me to understand why the code is not working?  I tried
> other approaches of function call such as

You haven't given us enough of the code to really answer.
> sensor_data=$`python execute_sensor_process.py` &
>
> python execute_sensor_process.py > tempfile.txt &
> kinexon_data=`cat tempfile.txt` &
>
> But none of the approaches are working.
>
> Thank you,
> Venkatachalam Srinivasan

I wonder if you could completely eliminate the bash script -- do it all
in python.  I've written quite a few bash scripts, but not so many since
I started using python.  The only exception is for low-level functions on
systems without a functioning python.
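Eliminating bash entirely can look like the following sketch: one python process runs the sensor script, captures its stdout (printed to stdout, not stderr), and parses it back into a dict. The inline `-c` script is a hypothetical stand-in for execute_sensor_process.py:

```python
import json
import subprocess
import sys

# The "sensor" script prints its dictionary as JSON on stdout.
# (Stand-in for: subprocess.check_output(["python", "execute_sensor_process.py"]))
out = subprocess.check_output(
    [sys.executable, "-c", "import json; print(json.dumps({'temp': 21.5}))"]
)

# The consumer side parses stdout back into a real dict -- no bash variable needed.
sensor_data = json.loads(out)
print(sensor_data["temp"])
```

JSON on stdout survives the process boundary intact, which is exactly what the stderr-plus-bash-variable approach could not do.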
Re: Multiprocessing queue in py2.7
On Tue, 28 Mar 2017 15:38:38 -0400, Terry Reedy wrote:

> On 3/28/2017 2:51 PM, Frank Miles wrote:
>> I tried running a bit of example code from the py2.7 docs
>> (16.6.1.2. Exchanging objects between processes) only to have it fail.
>> The code is simply:
>> # -------
>> from multiprocessing import Process, Queue
>>
>> def f(q):
>>     q.put([42, None, 'hello'])
>>
>> if __name__ == '__main__':
>>     q = Queue()
>>     p = Process(target=f, args=(q,))
>>     p.start()
>>     print q.get()    # prints "[42, None, 'hello']"
>>     p.join()
>> # ---
>
> Cut and pasted, this runs as specified on 2.7.13 on Win 10
>
>> But what happens is f() fails:
>>
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
>>     self.run()
>>   File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
>>     self._target(*self._args, **self._kwargs)
>>   File "x.py", line 4, in f
>>     q.put([42, None, "Hello"])
>> AttributeError: 'int' object has no attribute 'put'
>
> This says that the arg bound to q in f is an int rather than a Queue.
> Are you sure that you posted the code that you ran?
>
>> This is on a Debian jessie host, though eventually it needs to run on a
>> raspberry pi 3 {and uses other library code that needs py2.7}.
>>
>> Thanks in advance for those marvelous clues!

Argghh!  I missed one stupid typo.  Somehow I had a '1' instead of the 'q'
in the Process(..) line.  My bad.  Sorry for the noise, and thanks!!  -F
Multiprocessing queue in py2.7
I tried running a bit of example code from the py2.7 docs
(16.6.1.2. Exchanging objects between processes) only to have it fail.
The code is simply:

# -------
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print q.get()    # prints "[42, None, 'hello']"
    p.join()
# ---

But what happens is f() fails:

Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "x.py", line 4, in f
    q.put([42, None, "Hello"])
AttributeError: 'int' object has no attribute 'put'

This is on a Debian jessie host, though eventually it needs to run on a
raspberry pi 3 {and uses other library code that needs py2.7}.

Thanks in advance for those marvelous clues!  -F
Re: Does anyone here use wxGlade on Linux?
On Thu, 11 Feb 2016 14:29:04 +0000, cl wrote:

> I am trying out wxGlade on Linux, version 0.7.1 of wxGlade on xubuntu
> 15.10.
>
> I have already written something using wxPython directly so I have the
> basics (of my Python skills and the environment) OK I think.
>
> I am having a lot of trouble getting beyond the first hurdle of creating
> a trivial Python GUI with wxGlade.  Some of the problem is no doubt that
> I'm unfamiliar with the interface, but I seem to repeatedly get to a
> situation where the interface won't respond to mouse clicks (though the
> main menu items still work; I can Exit OK, for instance).
>
> Is wxPython still buggy or is it really just down to my lack of
> familiarity with it?

Sure, there are bugs in wxPython, but they are "minor".  I haven't tried
using wxGlade, but if it's anything like the glade I tried using long ago
there are issues in getting the two working together.
Re: Which GUI?
On Fri, 24 Jul 2015 19:31:36 +0100, Paulo da Silva wrote:
[snip]
> Which technology is better?  matplotlib?  tkinter?  wxwidgets?  qt?

Sadly - I don't think wxpython has been ported to python3 yet.
[issue23382] Maybe can not shutdown ThreadPoolExecutor when call the method of shutdown
miles added the comment:

The attachment includes the patch file.

----------
keywords: +patch
nosy: +milesli
Added file: http://bugs.python.org/file38274/thread.py.patch

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue23382>
___
Python-bugs-list mailing list
Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue23382] Maybe can not shutdown ThreadPoolExecutor when call the method of shutdown
New submission from miles:

ThreadPoolExecutor may fail to shut down when its shutdown() method is
called.  Although the _shutdown flag is set to True in shutdown(), the
_worker() function may read a stale value of _shutdown (e.g. from a CPU
cache); in the worst case it could see an out-of-date value forever.  So
_worker() needs to acquire the lock before reading _shutdown, to make
sure it sees an up-to-date value.  The following is the new code:

def _worker(executor_reference, work_queue):
    try:
        while True:
            work_item = work_queue.get(block=True)
            if work_item is not None:
                work_item.run()
                continue
            executor = executor_reference()
            shutdown = False
            if executor is not None:        # the weakref may have died
                with executor._shutdown_lock:
                    shutdown = executor._shutdown
            # Exit if:
            #   - The interpreter is shutting down OR
            #   - The executor that owns the worker has been collected OR
            #   - The executor that owns the worker has been shutdown.
            if _shutdown or executor is None or shutdown:
                # Notice other workers
                work_queue.put(None)
                return
            del executor
    except BaseException:
        _base.LOGGER.critical('Exception in worker', exc_info=True)

def shutdown(self, wait=True):
    with self._shutdown_lock:
        self._shutdown = True
        self._work_queue.put(None)
    if wait:
        for t in self._threads:
            t.join()

----------
components: 2to3 (2.x to 3.x conversion tool)
messages: 235319
nosy: miles
priority: normal
severity: normal
status: open
title: Maybe can not shutdown ThreadPoolExecutor when call the method of shutdown
versions: Python 3.2
[issue23382] Maybe can not shutdown ThreadPoolExecutor when call the method of shutdown
miles added the comment:

The attachment includes the new code.

----------
Added file: http://bugs.python.org/file38002/thread.py
[issue23382] Maybe can not shutdown ThreadPoolExecutor when call the method of shutdown
miles added the comment:

The attachment includes the new code.

----------
Added file: http://bugs.python.org/file37997/thread.py
Pythonic way to iterate through multidimensional space?
I need to evaluate a complicated function over a multidimensional space as part of an optimization problem. This is a somewhat general problem in which the number of dimensions and the function being evaluated can vary from problem to problem. I've got a working version (with loads of conditionals, and it only works to #dimensions = 10), but I'd like something simpler and clearer and less hard-coded. I've web-searched for some plausible method, but haven't found anything nice. Any recommendations where I should look, or what technique should be used? TIA! -- https://mail.python.org/mailman/listinfo/python-list
Re: Pythonic way to iterate through multidimensional space?
On Tue, 05 Aug 2014 20:06:05 +0000, Frank Miles wrote:

> I need to evaluate a complicated function over a multidimensional space
> as part of an optimization problem.  This is a somewhat general problem
> in which the number of dimensions and the function being evaluated can
> vary from problem to problem.  I've got a working version (with loads of
> conditionals, and it only works to #dimensions = 10), but I'd like
> something simpler and clearer and less hard-coded.  I've web-searched
> for some plausible method, but haven't found anything nice.
>
> Any recommendations where I should look, or what technique should be
> used?  TIA!

Ah - should have waited.  The answer: itertools.product!  Very nice.
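For the record, the itertools.product approach can be sketched like this: one call replaces N nested loops, and the number of dimensions is just the length of a list. The objective function here is a made-up stand-in for the poster's "complicated function":

```python
import itertools

# 3 dimensions, each sampled at 4 points; the dimension count can vary
# per problem simply by changing this list.
axes = [range(4)] * 3

def cost(point):
    # Hypothetical objective: distance-squared from (1.5, 1.5, 1.5).
    return sum((x - 1.5) ** 2 for x in point)

# product(*axes) yields every grid point as a tuple, in lexicographic order.
best = min(itertools.product(*axes), key=cost)
print(best)
```

`min` over the product is an exhaustive grid search; for fine grids or many dimensions, the same `product` iterator also feeds chunked or parallel evaluation without changing the loop structure.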
Re: wx (not responding)
On Tue, 14 Jan 2014 07:26:10 -0800, ngangsia akumbo wrote:

> When i run this code on my pc it actually runs but signals that the app
> is not responding.

[snip most of the code]

> def main():
>     ex = wx.App()
>     Example(None)
>     ex.Mainloop()
>
> if __name__ == '__main__':
>     main()

When I tried to run it I got an AttributeError: 'App' object has no
attribute 'Mainloop'.  Not sure whether your wx version might have a
'Mainloop'.
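The casing is the likely culprit: wx.App spells its method MainLoop, with a capital L. A wx-free sketch of the same trap (the App class here is a stand-in, since wx isn't assumed to be installed):

```python
# Stand-in for wx.App, which names its event-loop method MainLoop (capital L).
class App:
    def MainLoop(self):
        return "event loop running"

ex = App()
print(hasattr(ex, "Mainloop"))   # this is why ex.Mainloop() raises AttributeError
print(hasattr(ex, "MainLoop"))
print(ex.MainLoop())
```

With the misspelling, MainLoop never runs, so the window never services its event queue, which is exactly the "not responding" symptom the poster described.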
Re: Experiences/guidance on teaching Python as a first programming language
On Thu, 12 Dec 2013 16:18:22 -0500, Larry Martell wrote:

> On Thu, Dec 12, 2013 at 11:51 AM, bob gailer <bgai...@gmail.com> wrote:
>> On 12/11/2013 9:07 PM, Larry Martell wrote:
>>> Nope.  Long before that I was working on computers that didn't boot
>>> when you powered them up.  You had to manually key in a bootstrap
>>> program from the front panel switches.
>> PDP8?  RIM loader, BIN loader?
> Data General Nova 3

IIRC - wasn't that a machine that didn't even have 'subtract' - you had
to complement and add (2 steps)?
Re: Python Front-end to GCC
On Tue, 22 Oct 2013 16:40:32 +0000, Steven D'Aprano wrote:

> On Tue, 22 Oct 2013 15:39:42 +0000, Grant Edwards wrote:
>
>>> No, I was thinking of an array.  Arrays aren't automatically
>>> initialised in C.
>>
>> If they are static or global, then _yes_they_are_.  They are zeroed.
>
> Not that I don't believe you, but do you have a reference for this?
> Because I keep finding references to uninitialised C arrays filled with
> garbage if you don't initialise them.
>
> Wait... hang on a second... /fires up the ol' trusty gcc
>
> [steve@ando c]$ cat array_init.c
> #include <stdio.h>
> int main()
> {
>     int i;
>     int arr[10];
>     for (i = 0; i < 10; i++) {
>         printf("arr[%d] = %d\n", i, arr[i]);
>     }
>     printf("\n");
>     return 0;
> }
> [steve@ando c]$ gcc array_init.c
> [steve@ando c]$ ./a.out
> arr[0] = -1082002360
> arr[1] = 134513317
> arr[2] = 2527220
> arr[3] = 2519564
> arr[4] = -1082002312
> arr[5] = 134513753
> arr[6] = 1294213
> arr[7] = -1082002164
> arr[8] = -1082002312
> arr[9] = 2527220
>
> What am I missing here?

What you're missing is that arr[] is an automatic variable.  Put a
static in front of it, or move it outside the function (to become
global) and you'll see the difference.
Re: PDF generator decision
On Tue, 14 May 2013 08:05:53 -0700, Christian Jurk wrote:

> Hi folks,
>
> This question may have been asked several times already, but the
> development of relevant software continues day by day.  For some time
> now I've been using xhtml2pdf [1] to generate PDF documents from HTML
> templates (which are rendered through my Django-based web application).
> This has been working for some time now, but I'm constantly adding new
> templates and they are not looking like I want (sometimes bold text is
> bold, sometimes not; layout issues; etc.).  I'd like to use something
> else than xhtml2pdf.
>
> So far I'd like to ask which is (probably) the best way to create PDFs
> in Python (3)?  It is important for me to be able to specify not only
> background graphics, paragraphs, tables and so on but also page
> headers/footers.  The reason is that I have a bunch of documents to be
> generated (including invoice templates, quotes - stuff like that).
>
> Any advice is welcome.  Thanks.
>
> [1] https://github.com/chrisglass/xhtml2pdf

Reportlab works well in Python 2.x.  Their _next_ version is supposed to
work with Python 3... {yes, not much help there}
Re: Urgent:Serial Port Read/Write
On Thu, 09 May 2013 23:35:53 +0800, chandan kumar wrote:

> Hi all,
> I'm new to python and facing an issue using serial in python.  I'm
> facing the below error:
>
> ser.write(port, command)
> NameError: global name 'ser' is not defined
>
> Please find the attached script and let me know what's wrong in my
> script, and also how I can read data from the serial port with the same
> script.

[snip]

> if __name__ == '__main__':
>     CurrDir = os.getcwd()
>     files = glob.glob('./*pyc')
>     for f in files:
>         os.remove(f)
>     OpenPort(26, 9600)
>     SetRequest(ER_Address)
>     #SysAPI.SetRequest('ER', ER_Address)
>     print "Test Scripts Execution complete"

What kind of 'port' is 26?  Is that valid on your machine?  My guess is
that ser is null (because the open is failing, likely due to the port
selection), leading to your subsequent problems.  HTH..
Re: Reinforced Concrete: Mechanics and Design (5th Ed., James G. MacGregor James K. Wight)
On 1/18/2013 7:32 PM, Roy Smith wrote:

> Can whoever manages the mailing list block this bozo?
>
> In article <db2dnygmdpv4agtnnz2dnuvz_o-dn...@giganews.com>, kalvinmanual1
> <kalvinmanu...@gmail.com> wrote:
>
>> I have solutions manuals to all problems and exercises in these
>> textbooks.  To get one in an electronic format contact me at:
>> kalvinmanual(at)gmail(dot)com and let me know its title, author and
>> edition.  Please this service is NOT free.

I don't use the mailing list, but I'll try another method for blocking
this alleged human.
Re: Spam source (Re: Horror Horror Horror!!!!!)
On Sunday, November 18, 2012 8:18:53 PM UTC-6, Mark Lawrence wrote:

> On 18/11/2012 19:31, Terry Reedy wrote:
>> The question was raised as to how much spam comes from googlegroups.
> I don't know the answer but I take the greatest pleasure in hurtling
> onto the dread googlegroups and gmane to report spam.  Thankfully it's
> easy, as the amount I receive via gmane is effectively zero.  YMMV?

It now takes two people reporting the same spam to get google groups to
do much about it.  I just reported this one as well, though.
Re: Unpaking Tuple
On 10/9/2012 1:07 AM, Bob Martin wrote:

> in 682592 20121008 232126 Prasad, Ramit <ramit.pra...@jpmorgan.com> wrote:
>> Thomas Bach wrote:
>>> Hi there,
>>>
>>> On Sat, Oct 06, 2012 at 03:08:38PM +0000, Steven D'Aprano wrote:
>>>
>>>> my_tuple = my_tuple[:4]
>>>> a,b,c,d = my_tuple if len(my_tuple) == 4 else (my_tuple + (None,)*4)[:4]
>>>
>>> Are you sure this works as you expect?  I just stumbled over the
>>> following:
>>>
>>> $ python
>>> Python 3.2.3 (default, Jun 25 2012, 23:10:56)
>>> [GCC 4.7.1] on linux2
>>> Type "help", "copyright", "credits" or "license" for more information.
>>> >>> split = ['foo', 'bar']
>>> >>> head, tail = split if len(split) == 2 else split[0], None
>>> >>> head
>>> ['foo', 'bar']
>>> >>> tail
>>>
>>> I don't get it!  Could someone help me, please?  Why is head not 'foo'
>>> and tail not 'bar'?
>>>
>>> Regards,
>>>     Thomas
>>
>> I think you just need to wrap the else clause in parentheses so it is
>> treated as a tuple.  Without the parentheses I believe it is grouping
>> the code like this:
>>
>> head, tail = (split if len(split) == 2 else split[0]), None
>>
>> You want:
>> head, tail = split if len(split) == 2 else (split[0], None)
>>
>> Ramit
>
> How does one unpack this post? ;-)

There are a number of programs for converting line endings between the
Linux, Windows, and Mac formats.  You could try running each of the
converters your operating system provides on that text, then checking
which one gives the most readable result.
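Ramit's grouping explanation can be verified directly. A conditional expression binds looser than the comma on its right, so the trailing ", None" is grouped outside the if/else unless parenthesized:

```python
split = ['foo', 'bar']

# What the original line actually means: the whole if/else is the first
# element, and None is the second -- so head gets the entire list.
head, tail = (split if len(split) == 2 else split[0]), None
print(head, tail)

# What was intended: the if/else picks between the list itself (to be
# unpacked) and the fallback pair (split[0], None).
head, tail = split if len(split) == 2 else (split[0], None)
print(head, tail)
```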
Re: Spam source (Re: Horror Horror Horror!!!!!)
On Sunday, November 18, 2012 1:35:00 PM UTC-6, Terry Reedy wrote:

> The question was raised as to how much spam comes from googlegroups.
> Not all, but more than half, I believe.  This one does.
>
> From: MoneyMaker <livewebcams...@gmail.com>
> ...
> Message-ID: <2d2a0b98-c587-4459-9489-680b1ddc4...@googlegroups.com>
>
> --
> Terry Jan Reedy

That depends on your definition of spam.  This one does not appear to be
trying to sell anything, and therefore does not meet some of the stricter
definitions.  Definitely off-topic, though.
Re: datetime issue
On 10/31/2012 2:35 PM, ru...@yahoo.com wrote:

> On 10/31/2012 09:11 AM, Grant Edwards wrote:
>> On 2012-09-16, ?? <nikos.gr...@gmail.com> wrote:
>>> Iam positng via google groups using chrome, thats all i know.
>> Learn something else.  Google Groups is seriously and permanently
>> broken, and all posts from Google Groups are filtered out and ignored
>> by many people (including myself -- I only saw this because somebody
>> else replied to it).
>
> Seriously?  That's pretty subjective.  I manage to use it without major
> problems, so it couldn't be that bad.  I posted previously on how to use
> it without the double posts or the double spacing.

If you're using it for reasonable purposes, you won't encounter its worst
flaw.  It's much too easy for spammers to use for posting spam.  I'd
estimate that about 99% of the world's newsgroup spam in English is
posted through Google Groups.
Re: datetime issue
On 10/31/2012 4:38 PM, Mark Lawrence wrote:

> On 31/10/2012 19:35, ru...@yahoo.com wrote:
>> On 10/31/2012 09:11 AM, Grant Edwards wrote:
>>> On 2012-09-16, ?? <nikos.gr...@gmail.com> wrote: .
>> Broken?  Yes.  But so is every piece of software in one way or another.
>> Thunderbird is one of the most perpetually buggy pieces of software I
>> have ever used on a continuing basis.
> Please provide evidence that Thunderbird is buggy.  I use it quite
> happily, don't have problems, and have never seen anybody complaining
> about it.

Why should they complain about it in this newsgroup?  Most of the people
who complain about it know that complaining in the newsgroup
mozilla.support.thunderbird is much more likely to get any problems
fixed.  Rather few newsgroup servers are allowed to carry that newsgroup;
news.mozilla.org is one of them.  The newsgroups section of Thunderbird
seems to have more bugs than the email section, partly because there are
more volunteers interested in working on the email section.
Re: Python garbage collector/memory manager behaving strangely
On 9/16/2012 9:12 PM, Dave Angel wrote:

> On 09/16/2012 09:07 PM, Jadhav, Alok wrote:
>> Hi Everyone,
>>
>> I have a simple program which reads a large file containing a few
>> million rows, parses each row (`numpy array`), converts it into an
>> array of doubles (`python array`), and later writes it into an `hdf5
>> file`.  I repeat this loop for multiple days.  After reading each file,
>> I delete all the objects and call the garbage collector.  When I run
>> the program, the first day is parsed without any error, but on the
>> second day I get a `MemoryError`.  I monitored the memory usage of my
>> program: during the first day of parsing, memory usage is around
>> **1.5 GB**.  When the first day's parsing is finished, memory usage
>> goes down to **50 MB**.  Now when the 2nd day starts and I try to read
>> the lines from the file, I get a `MemoryError`.  Following is the
>> output of the program.

Is it a 32-bit program?  If so, expect the maximum amount of memory it
can use - to hold the program, its current dataspace, and images of all
the files it has open - to be about 3.5 GB, even if it is running on a
64-bit computer with over 4 GB of memory.  32-bit addresses can only
refer to 4 GB of memory, and part of that 4 GB must be used for whatever
the operating system needs for running 32-bit programs.  With some of the
older compilers, only 2 GB can be used by the program; the other 2 GB is
reserved for the operating system.

How practical would it be to have that program run twice a day?  The
first time, it should ignore all the data for the second half of the day;
the second time, it should ignore all the data for the first half of the
day.
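The first question - whether the interpreter itself is a 32-bit build - can be answered from inside Python:

```python
import struct
import sys

# On a 32-bit build, pointers are 4 bytes and sys.maxsize is 2**31 - 1,
# so the process is capped near 4 GB of address space no matter how much
# RAM the machine has.  On a 64-bit build, pointers are 8 bytes.
pointer_bits = struct.calcsize("P") * 8
is_64bit = sys.maxsize > 2 ** 32
print(pointer_bits, is_64bit)
```

If this prints 32, moving to a 64-bit interpreter removes the address-space ceiling without any change to the parsing code.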
Re: Obnoxious postings from Google Groups
On 9/16/2012 8:18 AM, Ben Finney wrote:

> Νικόλαος Κούρας <nikos.gr...@gmail.com> writes:
>> Iam sorry i didnt do that on purpose and i dont know how this is done.
>> Iam positng via google groups using chrome, thats all i know.
>
> It is becoming quite clear that some change has happened recently to
> Google Groups that makes posts coming from there rather more obnoxious
> than before.  And there doesn't seem to be much its users can do except
> use something else.

You're probably referring to their change in the way they handle ends of
lines, which is now incompatible with most newsreaders, especially with
multiple levels of quoting.  The incompatibility tends to insert a blank
line after every line.  With multiple levels of quoting, this gives
blank-line groups that often roughly double in size for every level of
quoting.

Robert Miles
Re: Obnoxious postings from Google Groups
On 9/16/2012 10:44 AM, pandora.ko...@gmail.com wrote:

> When i tried to post just now by hitting submit, google groups told me
> that the following addresses have been found in this thread!  I guess it
> used them all to notify everyone!
>
> cdf072b2-7359-4417-b1e4-d984e4317...@googlegroups.com
> mailman.774.1347735926.27098.python-l...@python.org
> nikos.gr...@gmail.com

When you try to post anything to a newsgroup, they try to use their
method of preventing email spammers from getting email addresses by
complaining about any email addresses that look like they could be valid.
If you want to make the post compatible with their method, select the
option to edit the post when they offer it, and change the last three
characters before each @ in an email address to three periods (...).  The
submit should then work.
Re: Obnoxious postings from Google Groups
On 9/16/2012 8:14 PM, alex23 wrote: On Sep 17, 10:55 am, Roy Smith r...@panix.com wrote: They didn't buy the service. They bought the data. Well, they really bought both, but the data is all they wanted. I thought they'd taken most of the historical data offline now too? Some of it, but they still had my newsgroups posts from 15 years ago the last time I looked. They appear to have taken most of any Fidonet data offline, though. They appear to be taking some of the spam and other abuse offline after it's reported by at least two people, but rather slowly and not keeping up with the amount that's posted. For those of you running Linux: You may want to look into whether NoCeM is compatible with your newsreader and your version of Linux. It checks newsgroups news.lists.filters and alt.nocem.misc for lists of spam posts, and will automatically hide them for you. Not available for other operating systems, though, except possibly Unix. NoCeM http://www.cm.org/nocem.html bleachbot http://home.httrack.net/~nocem/ -- http://mail.python.org/mailman/listinfo/python-list
Re: Obnoxious postings from Google Groups
On 9/16/2012 8:18 AM, Ben Finney wrote: Νικόλαος Κούρας nikos.gr...@gmail.com writes: Iam sorry i didnt do that on purpose and i dont know how this is done. Iam positng via google groups using chrome, thats all i know. It is becoming quite clear that some change has happened recently to Google Groups that makes posts coming from there rather more obnoxious than before. And there doesn't seem to be much its users can do except use something else. Using Google Groups for posting to Usenet has been a bad idea for a long time, but now it just seems to be a sure recipe for annoying the rest of us. Again, not something you have much control over, except to stop using Google Groups. Could this mean that Google wants all the spam posted through Google Groups to look so obnoxious to the rest of Usenet that the spammers will go elsewhere? Robert Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: Dumping all the sql statements as backup
On 7/25/2012 8:56 AM, andrea crotti wrote: I have some long running processes that do very long simulations which at the end need to write things on a database. At the moment sometimes there are network problems and we end up with half the data on the database. The half-data problem is probably solved easily with sessions and sqlalchemy (a db-transaction), but still we would like to be able to keep a backup SQL file in case something goes badly wrong and we want to re-run it manually.. This might also be useful if we have to rollback the db for some reasons to a previous day and we don't want to re-run the simulations.. Anyone did something similar? It would be nice to do something like: with CachedDatabase('backup.sql'): # do all your things I'm now starting to do something similar, but in C, not Python. Apparently not using SQL. The simulations this is for often last a month or more. -- http://mail.python.org/mailman/listinfo/python-list
Re: OT: Text editors
On 7/29/2012 5:28 AM, Mark Lawrence wrote: On 29/07/2012 06:08, Ben Finney wrote: Tim Chase python.l...@tim.thechases.com writes: On Sat, Jul 28, 2012 at 6:29 PM, Mark Lawrence wrote: I highly recommend the use of notepad++. If anyone knows of a better text editor for Windows please let me know :) I highly recommend not tying your editor skills to a single OS, especially one as ornery for programmers as Windows. I'll advocate for Vim which is crazy-powerful and works nicely on just about any platform I touch. Others will advocate for Emacs, which I can't say fits the way my brain works but it's also powerful and loved by many. Right. I'm in Tim's position, but reversed: my preference is for Emacs but Vim is a fine choice also. They are mature, well-supported with regular updates and a massive library of plug-ins for different uses, have a huge community to help you, and work on all major programming OSen. The ubiquity of these two platforms makes a worthwhile investment of time spent in learning at least one if not both. I use both frequently in my work for different things, and they are good for pretty much any task involving manipulation of text. Learn one of Emacs or Vim well, and you won't need to worry about text editors again. Point taken, snag being I've never used any nix box in anger. This thread reminds of the good 'ole days when I were a lad using TPU on VMS. Have we got any VMS aficionados here? I used to run two VMS superminis. I'm not sure whether I still could, though. Robert Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: catch UnicodeDecodeError
On 7/26/2012 5:51 AM, Jaroslav Dobrek wrote: And the cool thing is: you can! :) In Python 2.6 and later, the new Py3 open() function is a bit more hidden, but it's still available:

    from io import open

    filename = "somefile.txt"
    try:
        with open(filename, encoding="utf-8") as f:
            for line in f:
                process_line(line)  # actually, I'd use process_file(f)
    except IOError, e:
        print("Reading file %s failed: %s" % (filename, e))
    except UnicodeDecodeError, e:
        print("Some error occurred decoding file %s: %s" % (filename, e))

Thanks. I might use this in the future.

    try:
        for line in f:  # here text is decoded implicitly
            do_something()
    except UnicodeDecodeError():
        do_something_different()

This isn't possible for syntactic reasons. Well, you'd normally want to leave out the parentheses after the exception type, but otherwise, that's perfectly valid Python code. That's how these things work. You are right. Of course this is syntactically possible. I was too rash, sorry. I confused it with some other construction I once tried. I can't remember it right now. But the code above (without the brackets) is semantically bad: the exception is not caught. The problem is that the vast majority of the thousands of files that I process are correctly encoded. But then, suddenly, there is a bad character in a new file. (This is so because most files today are generated by people who don't know that there is such a thing as encodings.) And then I need to rewrite my very complex program just because of one single character in one single file. Why would that be the case? The places to change should be very local in your code. This is the case in a program that has many different functions which open and parse different types of files. When I read and parse a directory with such different types of files, a program that uses "for line in f:" will not exit with any hint as to where the error occurred. It just exits with a UnicodeDecodeError.
That means I have to look at all functions that have some variant of "for line in f:" in them. And it is not sufficient to replace the "for line in f" part. I would have to transform many functions that work in terms of lines into functions that work in terms of decoded bytes. That is why I usually solve the problem by moving files around until I find the bad file. Then I recode or repair the bad file manually. Would it be reasonable to use pieces of the old program to write a new program that prints the name of an input file, then searches that input file for bad characters? If it doesn't find any, it can then go on to the next input file, or show a message saying that no bad characters were found. -- http://mail.python.org/mailman/listinfo/python-list
Re: the meaning of r?.......ï¾
On 7/23/2012 1:10 PM, Dennis Lee Bieber wrote: On Mon, 23 Jul 2012 16:42:51 +0200, Henrik Faber hfa...@invalid.net declaimed the following in gmane.comp.python.general: If that was written by my coworkers, I'd strangle them. My first real assignment, 31 years ago, was porting an application to CDC MP-60 FORTRAN (what I called FORTRAN MINUS TWO). This was a minimal FORTRAN implementation in which one could not do things like: ix = 20 call xyz(ix, ix+2, ix-2) forcing us to produce such abominations as ix = 20 jinx = ix + 2 minx = ix - 2 call xyz(ix, jinx, minx) One of my first jobs involved helping maintain a Fortran program originally written for an early IBM 360 with only 64 kilobytes of memory. It included an assembler routine to do double precision floating point (that early computer couldn't do it as hardware instructions) and another assembler routine to do dynamic overlays - load one more subroutine into memory just before calling it (and then usually overwriting it with the next subroutine to be called after finishing the first one). Originally, the computer operators had to reload the operating system when this program finished, because it had to overwrite the operating system in order to have enough memory to run. When I worked on it, it ran under IBM's DOS (for mainframes). I never saw any attempts to make it run under Microsoft's DOS (for microcomputers). -- http://mail.python.org/mailman/listinfo/python-list
Re: Encapsulation, inheritance and polymorphism
On 7/23/2012 11:18 AM, Albert van der Horst wrote: In article 5006b48a$0$29978$c3e8da3$54964...@news.astraweb.com, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote: SNIP. Even with a break, why bother continuing through the body of the function when you already have the result? When your calculation is done, it's done, just return for goodness sake. You wouldn't write a search that keeps going after you've found the value that you want, out of some misplaced sense that you have to look at every value. Why write code with unnecessary guard values and temporary variables out of a misplaced sense that functions must only have one exit? Example from recipee's: Stirr until the egg white is stiff. Alternative: Stirr egg white for half an hour, but if the egg white is stiff keep your spoon still. (Cooking is not my field of expertise, so the wording may not be quite appropriate. ) -- Steven Groetjes Albert Note that you forgot applying enough heat to do the cooking. -- http://mail.python.org/mailman/listinfo/python-list
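The early-return style being argued for ("when your calculation is done, it's done, just return") can be shown in a few lines. `find_first` is an illustrative name, not code from the thread:

```python
def find_first(items, predicate):
    """Return the first item satisfying `predicate`, returning as soon
    as it is found -- no flag variable, no scanning the rest."""
    for item in items:
        if predicate(item):
            return item   # the search is done; stop stirring
    return None
```

The single-exit alternative would need a found flag and a result variable carried through the rest of the loop, for no benefit.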
Re: Python Interview Questions
On 7/10/2012 1:08 PM, Demian Brecht wrote: I also judge candidates on their beards (http://www.wired.com/wiredenterprise/2012/06/beard-gallery/). If the beard's awesome enough, no questions needed. They're pro. You should hire me quickly, then, since I have a beard, already turning partly gray. Never mind that the computer languages I have studied enough to write even one program don't yet include Python. -- http://mail.python.org/mailman/listinfo/python-list
Re: lambda in list comprehension acting funny
On 7/11/2012 11:39 PM, Dennis Lee Bieber wrote: On 12 Jul 2012 03:53:03 GMT, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info declaimed the following in gmane.comp.python.general: ALU class? Googling gives me no clue. Arithmetic/Logic Unit http://en.wikipedia.org/wiki/Arithmetic_logic_unit http://en.wikipedia.org/wiki/74181 {diversion: http://en.wikipedia.org/wiki/VAX-11/780 -- incredible... that used to be considered a super-mini when I worked on them; and now would be shamed by most laptops except for the ability to support so many users concurrently (let me know when a Windows laptop supports 32 VT-100 class connections G)} Installing Cygwin (a Linux emulation) under Windows appears to add some VT-100 support but without an easy way to find documentation on whether it can support 32 of them or not. I used to work on a VAX 11/780 and also a VAX 8600. Cygwin has a version of Python available, in case you're interested. Robert Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: Some posts do not show up in Google Groups
On 4/30/2012 1:20 AM, Frank Millman wrote: Hi all For a while now I have been using Google Groups to read this group, but on the odd occasion when I want to post a message, I use Outlook Express, as I know that some people reject all messages from Google Groups due to the high spam ratio (which seems to have improved recently, BTW). From time to time I see a thread where the original post is missing, but the follow-ups do appear. My own posts have shown up with no problem. Now, in the last month, I have posted two messages using Outlook Express, and neither of them have shown up in Google Groups. I can see replies in OE, so they are being accepted. I send to the group gmane.comp.python.general. Does anyone know a reason for this, or have a solution? Frank Millman I can't answer your main question, but I have seen a reason for Google Groups appearing to have less spam. That was about the time Google Groups introduced a new interface which makes it obvious if anyone has already reported a message as spam (or any other type of abuse), and also made those messages inaccessible after two different users reported the same message as abuse. Therefore, many spammers are moving toward newsgroups where no one reports the spam. Robert Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: Some posts do not show up in Google Groups
On 5/1/2012 1:12 AM, Frank Millman wrote: On Apr 30, 8:20 am, Frank Millmanfr...@chagford.com wrote: Hi all For a while now I have been using Google Groups to read this group, but on the odd occasion when I want to post a message, I use Outlook Express, as I know that some people reject all messages from Google Groups due to the high spam ratio (which seems to have improved recently, BTW). From time to time I see a thread where the original post is missing, but the follow-ups do appear. My own posts have shown up with no problem. Now, in the last month, I have posted two messages using Outlook Express, and neither of them have shown up in Google Groups. I can see replies in OE, so they are being accepted. I send to the group gmane.comp.python.general. Does anyone know a reason for this, or have a solution? Frank Millman Thanks for the replies. I am also coming to the conclusion that Google Groups is no longer fit-for-purpose. Ironically, here are two replies that I can see in Outlook Express, but do not appear in Google Groups. Reply from Benjamin Kaplan - I believe the mail-to-news gateway has trouble with HTML messages. Try sending everything as plain text and see if that works. I checked, and all my posts were sent in plain text. Reply from Terry Reedy - Read and post through news.gmane.org I have had a look at this before, but there is one thing that Google Groups does that no other reader seems to do, and that is that messages are sorted according to thread-activity, not original posting date. This makes it easy to see what has changed since the last time I checked. All the other ones I have looked at - Outlook Express, Thunderbird, and gmane.org, sort by original posting date, so I have to go backwards to see if any threads have had any new postings. Maybe there is a setting that I am not aware of. Can anyone enlighten me? 
Thanks Frank Thunderbird appears to change the sorting order for me based on whether I tell it to put the newest messages at the top of its window or at the bottom. Some newsgroups servers appear to discard every post they get that contains any HTML. This may be because Google Groups often adds HTML even if you don't ask for it, and those servers want to avoid the poor signal to noise ratio from Google Groups. Robert Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: Create directories and modify files with Python
On 5/1/2012 5:51 AM, deltaquat...@gmail.com wrote: Il giorno martedì 1 maggio 2012 01:57:12 UTC+2, Irmen de Jong ha scritto: [snip] Focus on file input and output, string manipulation, and look in the os module for stuff to help scanning directories (such as os.walk). Irmen Thanks for the directions. By the way, can you see my post in Google Groups? I'm not able to, and I don't know why. Sergio They may have copied the Gmail idea that you never need to see anything you posted yourself. When I post anything using Google Groups, my posts usually show up, but often very slowly - as in 5 minutes after I tell it to post. Robert Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: Half-baked idea: list comprehensions with while
On 27/04/2012 5:57 a.m., Kiuhnm wrote: On 4/26/2012 19:48, Paul Rubin wrote: Roy Smithr...@panix.com writes: x = [a for a in iterable while a] from itertools import takewhile x = takewhile(bool, a) I see that as a 'temporary' solution, otherwise we wouldn't need 'if' inside of list comprehensions either. Kiuhnm We have if inside list comprehensions? I didn't know that, could you provide an example? -- http://mail.python.org/mailman/listinfo/python-list
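To answer the question at the end: yes, list comprehensions accept an `if` clause, and `itertools.takewhile` gives the effect the proposed `while` clause would have had. A small illustration (the values are chosen arbitrarily):

```python
from itertools import takewhile

values = [3, 1, 4, 0, 5, 9]

# An `if` inside a list comprehension filters every element:
evens = [v for v in values if v % 2 == 0]

# takewhile() stops at the first falsy element -- the behaviour the
# hypothetical `while` clause in the original post would have had:
prefix = list(takewhile(bool, values))
```

Here `evens` is `[4, 0]` (filtering continues past the 0), while `prefix` is `[3, 1, 4]` (truncation stops at the 0).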
Re: Strange __import__() behavior
On Wed, 25 Apr 2012 23:03:36 +0200, Kiuhnm wrote: On 4/25/2012 22:05, Frank Miles wrote: I have an exceedingly simple function that does a named import. It works perfectly for one file ('r') and fails for the second ('x'). If I reverse the order of being called, it is still 'x' that fails, and 'r' still succeeds. os.access() always reports that the file is readable (i.e. true). If I simply call up the python interpreter (python 2.6 - Debian stable) and manually import x - there is no problem - both work. Similarly, typing __import__('x') works when typed directly at the python prompt. Both 'x' and 'r' pass pychecker with no errors. The same error occurs with winpdb - the exception message says that the file could not be found. 'file' reports that both files are text files, and there aren't any strange file-access permissions/attributes. Here's the function that is failing:

    def named_import(fname, description):
        import os
        pname = fname + '.py'
        print "ENTRY FILE", pname, ": access=", os.access(pname, os.R_OK)
        try:
            X = __import__(fname)
            x = [X.cmnds, X.variables]
        except ImportError:
            print "failed"
        return x

This is the first time I've needed to import a file whose name couldn't be specified in the script, so there's a chance that I've done something wrong, but it seems very weird that it works in the CL interpreter and not in my script. TIA for any hints or pointers to the relevant overlooked documentation! I can't reproduce your problem on my configuration. Anyway, you should note that if x.pyc and r.pyc are present, __import__ will try to import them and not the files x.py and r.py. Try deleting x.pyc and r.pyc. Kiuhnm You are fast in replying! I nuked my query (within a few minutes of posting) when I discovered the reason. Perhaps it persisted in some domain. I'd forgotten that the python script containing the described function was not the file itself, but a link to the script.
When I executed the script (er, link to the script) - even with winpdb - apparently __import__ examined the directory where the actual file resided. In _that_ directory, only 'r' existed, no 'x'. So thanks for trying, there was no way you (or anyone) could have seen that the script was just a link to a script... -F -- http://mail.python.org/mailman/listinfo/python-list
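Given the symlink explanation above, one defensive approach is to put both candidate directories — where the script was invoked from, and where the real file behind the link lives — on sys.path before importing. This is purely a sketch; `candidate_dirs` and this `named_import` variant are hypothetical, not from the thread:

```python
import os
import sys

def candidate_dirs(script_path):
    """Directories where a named import might live: the directory of the
    script as invoked, and -- when the script is really a symlink -- the
    directory of the file the link points at."""
    invoked = os.path.dirname(os.path.abspath(script_path))
    resolved = os.path.dirname(os.path.realpath(script_path))
    return [invoked] if resolved == invoked else [invoked, resolved]

def named_import(fname, script_path=None):
    """Import module `fname`, searching both candidate directories so the
    import works whether or not the script was run through a link."""
    for d in candidate_dirs(script_path or sys.argv[0]):
        if d not in sys.path:
            sys.path.insert(0, d)
    return __import__(fname)
```

With both directories on sys.path, the import succeeds regardless of which side of the link the module files sit on.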
Strange __import__() behavior
I have an exceedingly simple function that does a named import. It works perfectly for one file ('r') and fails for the second ('x'). If I reverse the order of being called, it is still 'x' that fails, and 'r' still succeeds. os.access() always reports that the file is readable (i.e. true). If I simply call up the python interpreter (python 2.6 - Debian stable) and manually import x - there is no problem - both work. Similarly, typing __import__('x') works when typed directly at the python prompt. Both 'x' and 'r' pass pychecker with no errors. The same error occurs with winpdb - the exception message says that the file could not be found. 'file' reports that both files are text files, and there aren't any strange file-access permissions/attributes. Here's the function that is failing:

    def named_import(fname, description):
        import os
        pname = fname + '.py'
        print "ENTRY FILE", pname, ": access=", os.access(pname, os.R_OK)
        try:
            X = __import__(fname)
            x = [X.cmnds, X.variables]
        except ImportError:
            print "failed"
        return x

This is the first time I've needed to import a file whose name couldn't be specified in the script, so there's a chance that I've done something wrong, but it seems very weird that it works in the CL interpreter and not in my script. TIA for any hints or pointers to the relevant overlooked documentation! -F -- http://mail.python.org/mailman/listinfo/python-list
[issue13405] Add DTrace probes
Changes by Chris Miles miles.ch...@gmail.com: -- nosy: -chrismiles ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue13405 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: Complex sort on big files
Hi Dan, Thanks for the reply. On Mon, Aug 1, 2011 at 5:45 PM, Dan Stromberg drsali...@gmail.com wrote: Python 2.x, or Python 3.x? Currently Python 2.x. What are the types of your sort keys? Both numbers and strings. If you're on 3.x and the key you need reversed is numeric, you can negate the key. I did wonder about that. Would that not be doable also in Python 2.7, using sorted(key=...)? If you're on 2.x, you can use an object with a __cmp__ method to compare objects however you require. OK, right. Looking at the HowTo/Sorting again [1] and the bit about cmp_to_key, could you also achieve the same effect by returning a key with custom implementations of rich comparison functions? You probably should timsort the chunks (which is the standard list_.sort() - it's a very good in-memory sort), and then merge them afterward using the merge step of merge sort. Yes, that's what I understood by the activestate recipe [2]. So I guess my question boils down to, how do you do the merge step for a complex sort? (Assuming each chunk had been completely sorted first.) Maybe the answer is also to construct a key with custom implementation of rich comparisons? Now I'm also wondering about the best way to sort each chunk. The examples in [1] of complex sorts suggest the best way to do it is to first sort by the secondary key, then sort by the primary key, relying on the stability of the sort to get the desired outcome. But would it not be better to call sorted() once, supplying a custom key function? (As an aside, at the end of the section in [1] on Sort Stability and Complex sorts, it says The Timsort algorithm used in Python does multiple sorts efficiently because it can take advantage of any ordering already present in a dataset. - I get that that's true, but I don't see how that's relevant to this strategy for doing complex sorts. I.e., if you sort first by the secondary key, you don't get any ordering that's helpful when you subsequently sort by the primary key. ...?) 
(Sorry, another side question, I'm guessing reading a chunk of data into a list and using Timsort, i.e., calling list.sort() or sorted(mylist), is quicker than using bisect to keep the chunk sorted as you build it?) heapq's not unreasonable for the merging, but I think it's more common to use a short list. Do you mean a regular Python list, and calling min()? I have a bunch of Python sorts at http://stromberg.dnsalias.org/svn/sorts/compare/trunk/ - if you find your need is specialized (EG, 3.x sorting by a secondary key that's a string, in reverse), you could adapt one of these to do what you need. Thanks. Re 3.x sorting by a secondary key that's a string, in reverse, which one were you thinking of in particular? heapq is not bad, but if you find you need a logn datastructure, you might check out http://stromberg.dnsalias.org/~dstromberg/treap/ - a treap is also logn, but has a very good amortized cost because it sacrifices keeping things perfectly balanced (and rebalancing, and rebalancing...) to gain speed. But still, you might be better off with a short list and min. Thanks, that's really helpful. Cheers, Alistair [1] http://wiki.python.org/moin/HowTo/Sorting/ [2] http://code.activestate.com/recipes/576755-sorting-big-files-the-python-26-way/ On Mon, Aug 1, 2011 at 8:33 AM, aliman aliman...@googlemail.com wrote: Hi all, Apologies I'm sure this has been asked many times, but I'm trying to figure out the most efficient way to do a complex sort on very large files. I've read the recipe at [1] and understand that the way to sort a large file is to break it into chunks, sort each chunk and write sorted chunks to disk, then use heapq.merge to combine the chunks as you read them. What I'm having trouble figuring out is what to do when I want to sort by one key ascending then another key descending (a complex sort). 
I understand that sorts are stable, so I could just repeat the whole sort process once for each key in turn, but that would involve going to and from disk once for each step in the sort, and I'm wondering if there is a better way. I also thought you could apply the complex sort to each chunk before writing it to disk, so each chunk was completely sorted, but then the heapq.merge wouldn't work properly, because afaik you can only give it one key. Any help much appreciated (I may well be missing something glaringly obvious). Cheers, Alistair [1] http://code.activestate.com/recipes/576755-sorting-big-files-the-python-26-way/ -- http://mail.python.org/mailman/listinfo/python-list -- -- Alistair Miles Head of Epidemiological Informatics Centre for Genomics and Global Health http://cggh.org The Wellcome Trust Centre for Human Genetics Roosevelt Drive Oxford OX3 7BN United Kingdom Web: http://purl.org/net/aliman Email: aliman...@gmail.com Tel: +44 (0)1865 287669 -- http://mail.python.org/mailman/listinfo/python-list
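One way to answer the merge-step question above: build a composite key that negates the numeric field so a single ascending sort expresses "primary key ascending, secondary key descending", sort each chunk with it, and pass the same key to the merge. The records and fields below are made up for illustration, and note that heapq.merge only accepts key= on Python 3.5+ (on older versions you would merge decorated (key, record) tuples instead):

```python
import heapq

def sort_key(record):
    """Composite key: name ascending, then score descending.
    Negating the number turns the descending part into an ascending one."""
    name, score = record
    return (name, -score)

records = [("b", 1), ("a", 2), ("a", 5), ("b", 3)]

# Sort each chunk fully with the composite key (in the external-sort
# setting these would be written to disk)...
chunk1 = sorted(records[:2], key=sort_key)
chunk2 = sorted(records[2:], key=sort_key)

# ...then merge the sorted chunks with the same key.
merged = list(heapq.merge(chunk1, chunk2, key=sort_key))
```

Because every chunk and the merge use one key, a single pass over the data suffices; there is no need to re-sort once per key and go to disk for each step.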
[issue8108] test_ftplib fails with OpenSSL 0.9.8m
Darryl Miles darryl.mi...@darrylmiles.org added the comment: With regard to creating test cases for certain situations, sure this would be possible, but not with pure python, since your APIs deny/inhibit the particular things required to force a situation for a test case. With regard to SSL_peek() blocking, you'd need to explain yourself better on that one. The patch has been tested with the test cases from Python SVN enough to be happy they run ok. Maybe you have some offline, not-yet-checked-in SSL test cases you are referring to. To clarify why this is being done: if there is unread data then SSL_shutdown() will never return 1. Maybe you can simulate this situation by using SSL_write() with 1 byte payloads and a 10ms delay between each SSL_write() of the QUIT response message (you are trying to simulate network propagation delay). Then you have a client that tries to do unwrap() right after having sent the quit command, but makes no attempt to receive the response. I'll leave you guys to it about how you want to handle things with python (i.e. to make the design choice trade offs). I think all the points have been covered. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8108 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8108] test_ftplib fails with OpenSSL 0.9.8m
Darryl Miles darryl.mi...@darrylmiles.org added the comment: I am unable to get 'make test' to run from an unpatched version in SVN (details below of make output). Please find attached an updated patch for your consideration (and testing, as I can't test it due to 'make test' SIGSEGV on CentOS 5.4 i386). Patch Notes:

1) Something that concerns me: the unwrap() philosophy looks to be used to remove SSL from the Python high-level socket handle, so you can go back to plaintext mode. You can ONLY perform an unwrap() AFTER an SSL_shutdown()==1 has been observed (you need to wait for the other end to do something voluntarily). So you must retry the SSL_shutdown() over and over while you sleep-wait for IO, so this is akin to calling ssl.shutdown(ssl.SSL_SHUTDOWN_MODE_BOTH) and getting back success. Also, if it is your intention to properly implement an unwrap() like this, you should disable IO read-ahead mode before calling shutdown for the second time: SSL_set_read_ahead(ssl, 0). This stops OpenSSL from accidentally eating too many bytes (probably from the kernel into its own buffers) from the inbound IO stream, which may not be SSL protocol data; it may be plaintext data (behind the last byte of SSL protocol data).

2) Due to the IO waiting, it also looks necessary to copy the setup of SSL_set_nbio() from the read/write paths so that check_socket_and_wait_for_timeout() works in sympathy with the caller's IO timeout reconfiguration.

3) My patch presumes the allocation of the struct PySSLObject type uses calloc() or some other memory-zeroing strategy. There is a new member in that struct to track whether SSL_shutdown() has previously returned a zero value.

4) The SSL_peek() error path needs checking to see if the error return is consistent with the Python paradigm.

5) Please check I have understood the VARARGS method correctly.
I have made the default SSL_SHUTDOWN_MODE_SENT (despite backward compatibility being SSL_SHUTDOWN_MODE_ONCE); this is because I would guess that most high-level applications did not intend to use it in raw mode, nor be bothered with the issues surrounding correct usage. I would guess high-level applications wanted Python to take the strain here.

6) I suspect you need to address your unwrap() policy a little better; the shutdown operation and the unwrap() are two different matters. shutdown() should indicate success or not (in respect of the mode being requested; raw mode is a tricky one, as the caller would want the exact error return so it can do the correct thing), and unwrap() should itself call ssl.shutdown(ssl.SSL_SHUTDOWN_MODE_BOTH) until it sees success and then remove the socket (and deallocate SSL objects). As things stand, SSL_SHUTDOWN_MODE_ONCE does not work in a useful way, since the error returns are not propagated to the caller because unwrap is mixed into this. So that would still need fixing.

Building works ok; testing fails with SIGSEGV. Is this something to do with not having _bsddb built? I have db-4.3 working. Maybe someone can reply by email on the matter.

    # make
    running build
    running build_ext
    building dbm using gdbm
    Python build finished, but the necessary bits to build these modules were not found:
    bsddb185           sunaudiodev
    To find the necessary bits, look in setup.py in detect_modules() for the module's name.
    running build_scripts
    # make test
    running build
    running build_ext
    building dbm using gdbm
    Python build finished, but the necessary bits to build these modules were not found:
    bsddb185           sunaudiodev
    To find the necessary bits, look in setup.py in detect_modules() for the module's name.
    running build_scripts
    find ./Lib -name '*.py[co]' -print | xargs rm -f
    ./python -Wd -3 -E -tt ./Lib/test/regrtest.py -l
    == CPython 2.7a4+ (trunk:79902M, Apr 11 2010, 16:38:55) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)]
    == Linux-2.6.18-164.15.1.el5-i686-with-redhat-5.4-Final
    == /root/python-svn/build/test_python_29248
    test_grammar
    test_opcodes
    test_dict
    test_builtin
    test_exceptions
    test_types
    test_unittest
    test_doctest
    test_doctest2
    test_MimeWriter
    test_SimpleHTTPServer
    test_StringIO
    test___all__
    /root/python-svn/Lib/test/test___all__.py:10: DeprecationWarning: in 3.x, the bsddb module has been removed; please use the pybsddb project instead
      import bsddb
    /root/python-svn/Lib/bsddb/__init__.py:67: PendingDeprecationWarning: The CObject type is marked Pending Deprecation in Python 2.7. Please use capsule objects instead.
      import _bsddb
    make: *** [test] Segmentation fault

-- Added file: http://bugs.python.org/file16872/Modules__ssl.c.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8108 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list
[issue8108] test_ftplib fails with OpenSSL 0.9.8m
Darryl Miles darryl.mi...@darrylmiles.org added the comment: To explain why you need 2 modes, a client/server would expect to do the following pseudo actions for maximum efficiency. A client would:

    set_socket_timeout(600_SECONDS)           # or useful default
    send_data_over_ssl("QUIT\r\n")
    shutdown(SSL_SHUTDOWN_MODE_SENT)
    flush_data_down_to_socket()               # maybe automatic/implied (OpenSSL users with custom BIO layers should be aware of this step)
    shutdown(socket, SHUT_WR)                 # this is optional, TCP socket level shutdown
    recv_data_over_ssl() => "250 Bye bye!\r\n"  # this will take time to arrive
    set_socket_io_timeout(5_SECONDS)
    shutdown(SSL_SHUTDOWN_MODE_BOTH)          # this is optional! some clients may choose to skip it entirely
    close()/unwrap()

A server would:

    recv_data_over_ssl() => "QUIT\r\n"        # would be sitting idle waiting for this command
    send_data_over_ssl("250 Bye bye!\r\n")
    shutdown(SSL_SHUTDOWN_MODE_SENT)
    flush_data_down_to_socket()               # maybe automatic/implied (OpenSSL users with custom BIO layers should be aware of this step)
    shutdown(socket, SHUT_WR)                 # this is optional, TCP socket level shutdown
    set_socket_io_timeout(30_SECONDS)
    shutdown(SSL_SHUTDOWN_MODE_BOTH)          # a good server would implement this step
    close()/unwrap()

Now if your outbound data is CORKed and flushed, the flush points would cause all the SSL data from both the 'last sent data' and the 'send shutdown notify' to go out in the same TCP segment and arrive at the other end more or less together. Doing any of the above in a different order introduces some kind of inefficiency. shutdown(fd, SHUT_WR) is often used at the socket level to help manage TIME_WAIT. The client has to wait for the QUIT response message anyway. With the above sequence there is no additional time delay or cost with both parties performing an SSL protocol shutdown at the same time, despite the IO timeouts existing (to provide a safety net).
If the client is talking to a buggy server, the worst case scenario is that it receives the quit response but the server never does an SSL shutdown and never closes the socket connection. In this situation the client will have to wait for the IO timeout; some clients in other software use blocking sockets without a timeout, so they end up hung (forever).
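The shutdown(socket, SHUT_WR) half-close step in the sequence above can be illustrated with plain (non-SSL) sockets. This is a minimal sketch of the QUIT exchange only, not of the SSL-layer shutdown itself; the client/server function names are invented for the example.

```python
import socket
import threading

def client(conn):
    # Send the final command, then half-close the write side: the peer
    # sees EOF, but this end can still read the goodbye response.
    conn.sendall(b"QUIT\r\n")
    conn.shutdown(socket.SHUT_WR)
    reply = conn.recv(1024)   # the "250 Bye bye!" takes time to arrive
    conn.close()
    return reply

def server(conn):
    # Read until EOF (the client's half-close), then answer and close.
    data = b""
    while True:
        chunk = conn.recv(1024)
        if not chunk:
            break
        data += chunk
    conn.sendall(b"250 Bye bye!\r\n")
    conn.close()
    return data

# Drive both ends over a connected pair, with the server in its own thread.
a, b = socket.socketpair()
replies = {}
t = threading.Thread(target=lambda: replies.setdefault("req", server(b)))
t.start()
reply = client(a)
t.join()
```

Note that even after SHUT_WR the client's read side stays open, which is exactly why both parties can perform their shutdowns concurrently without extra round-trip delay.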
[issue8108] test_ftplib fails with OpenSSL 0.9.8m
Darryl Miles <darryl.mi...@darrylmiles.org> added the comment:

In order to build Python with a specific version of OpenSSL, I followed the CYGWIN instructions and edited Modules/Setup to make it read (note: I added -L$(SSL) to the linker options too, since by default on CentOS 5.4 i386 OpenSSL builds in static library mode, i.e. ../openssl-1.0.0/libssl.a):

SSL=../openssl-1.0.0
_ssl _ssl.c \
    -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
    -L$(SSL)/lib -L$(SSL) -lssl -lcrypto

It is not clear to me what Python's goals are:

* To be backward compatible, in which case I don't know your historical use of SSL_shutdown().
* To be a thin layer (1:1) over OpenSSL, so that power users can harness the full potential of OpenSSL if they are willing to understand the finer points.
* To provide a full-featured Python API.
* To provide a Python API that is easy to use within the Python paradigm.

These goals may not be convergent.

--
nosy: +dlmiles
Added file: http://bugs.python.org/file16838/python_ssl.c.txt
[issue8108] test_ftplib fails with OpenSSL 0.9.8m
Darryl Miles <darryl.mi...@darrylmiles.org> added the comment:

I've updated my attachment to the bug; if you read the old one, please re-read the updated version (some points in there were not accurate).

With regards to the OpenSSL error return -1/ERROR_SYSCALL with errno==0 being observed, I shall respond on the OpenSSL mailing list with a fuller response. The man page SSL_get_error(3) does explain what getting a zero error means in relation to end-of-file at the BIO/TCP socket level. In light of my presumption that the problem was because one end did a syscall close(fd), this makes perfect sense in the context of your observation, and OpenSSL appears to be working as documented.

There is also code to print out the error in Python at Modules/_ssl.c method PySSL_SetError(), so I'm not sure of the source of the funny looking error printing in relation to the ftpcli test case; consider it to be an error message formatting glitch.

Now the issue I see here is that there are clearly 3 use cases Python should provide:

* one-shot raw mode: don't enter the loop at all, as per newssl5.patch/my attachment. This is more or less what you already have in CVS, but I would remove the 2nd call to SSL_shutdown(); raw mode means exactly that, the caller is in charge of calling it again. A thin layer for Python power users. [case-1]
* perform IO sleep/wait as necessary until we observe SSL_shutdown() == 0 (or better, so this will return if 0 or 1 is returned) [case-2]
* perform IO sleep/wait as necessary until we observe SSL_shutdown() == 1 [case-3]

I presume you already have a way of handling the configuration of I/O timeouts as per Python's IO processing model (provided by some other API mechanism). The question is what is the best way to provide them (what is in line with the Python paradigm?):

* one method: keep the existing named method, and add a new optional argument that can indicate all 3 modes of operation. Debate which of the 3 modes is the default when the argument is not specified; case-1 seems the most backward compatible. [I am presuming Python supports optional arguments]
* new method: keep the existing method as-is (to cover case-1), and implement case-2 and case-3 in a new method which also takes an argument for the user to specify which use case they want.

From this a patch should be straightforward. Then we can look to see if the FTP client or server is doing anything wrong, in light of having the building blocks in place to achieve any goal on top of OpenSSL.

--
Added file: http://bugs.python.org/file16845/python_ssl_v2.c.txt
[issue8108] test_ftplib fails with OpenSSL 0.9.8m
Changes by Darryl Miles <darryl.mi...@darrylmiles.org>:

--
Removed file: http://bugs.python.org/file16838/python_ssl.c.txt
Re: Timestamps for TCP packets?
On Oct 2, 2009, at 12:03 AM, Thomas Johnson wrote:
> Is there any way to get kernel-level timestamps for TCP packets while still using the standard python sockets library for communication? I need to communicate over a TCP connection as easily as possible, but also record the timestamps of the incoming and outgoing packets at microsecond or nanosecond resolution. The sockets library is of course great for the communication, and I've seen some python libraries that do packet sniffing and record timestamps, but it's not clear that I can do both at the same time.

Have you tried it? I don't know of any reason that using sockets and doing a packet capture would interfere with each other. What are you trying to accomplish with the packet sniffing, though?

-Miles
--
http://mail.python.org/mailman/listinfo/python-list
Re: weak reference to bound method
On Oct 2, 2009, at 1:54 AM, Ole Streicher wrote:
> I am trying to use a weak reference to a bound method:
>
> class MyClass(object):
>     def myfunc(self):
>         pass
>
> o = MyClass()
> print o.myfunc
> <bound method MyClass.myfunc of <__main__.MyClass object at 0xc675d0>>
>
> import weakref
> r = weakref.ref(o.myfunc)
> print r()
> None
>
> This is what I do not understand. The object o is still alive, and therefore the bound method o.myfunc shall exist.

Like Peter said, bound methods are created on demand when they are obtained from the instance, not when the instance is created.

> Why does the weak reference claim that it is removed? And how can I hold the reference to the method until the object is removed?

You could combine unbound methods with a weakref to the object:

r = weakref.ref(o)
MyClass.myfunc(r())

You could also create a wrapper object that holds a weak reference to the instance and creates a bound method on demand:

class WeakMethod(object):
    def __init__(self, bound_method):
        self.im_func = bound_method.im_func
        self.im_self = weakref.ref(bound_method.im_self)
        self.im_class = bound_method.im_class

    def __call__(self):
        obj = self.im_self()
        if obj is None:
            return None
        return types.MethodType(self.im_func, obj, self.im_class)

    # could alternately act like a callable proxy

-Miles
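For what it's worth, modern Python (3.4 and later) ships this wrapper pattern in the standard library as weakref.WeakMethod, which keeps weak references to both the instance and the underlying function and rebuilds the bound method on demand. A small sketch:

```python
import weakref

class MyClass(object):
    def myfunc(self):
        return "called"

o = MyClass()
r = weakref.WeakMethod(o.myfunc)  # does not keep the instance alive

m = r()                # re-creates the bound method on demand
assert m is not None
assert m() == "called"

# Drop the only strong references; on CPython the instance is collected
# immediately via reference counting, so the weak method goes dead.
del m, o
assert r() is None
```

The final assertion relies on CPython's immediate reclamation; on other implementations a garbage-collection pass may be needed first.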
Re: Are min() and max() thread-safe?
On Sep 16, 2009, at 10:39 PM, Steven D'Aprano wrote:
> On Wed, 16 Sep 2009 22:08:40 -0700, Miles Kaufmann wrote:
>> On Sep 16, 2009, at 9:33 PM, Steven D'Aprano wrote:
>>> I have two threads, one running min() and the other running max() over the same list. I'm getting some mysterious results which I'm having trouble debugging. Are min() and max() thread-safe, or am I doing something fundamentally silly by having them walk over the same list simultaneously?
>>
>> min() and max() don't release the GIL, so yes, they are safe, and shouldn't see a list in an inconsistent state (with regard to the Python interpreter, but not necessarily to your application). But a threaded approach is somewhat silly, since the GIL ensures that they *won't* walk over the same list simultaneously (two separate lists, for that matter).
>
> Perhaps that's true for list contents which are built-ins like ints, but with custom objects, I can demonstrate that the two threads operate simultaneously at least sometimes. Unless I'm misinterpreting what I'm seeing.

Whoops, sorry. Yes, if you use Python functions (or C functions that release the GIL) for the object comparison methods, a custom key function, or the sequence iterator's methods, then the min()/max() calls could overlap between threads. If you have additional threads that could modify the list, you should synchronize access to it; if any of the earlier-mentioned functions modify the list, you're likely to get mysterious (or at least potentially unexpected) results even in a single-threaded context.

On Sep 16, 2009, at 10:41 PM, Niklas Norrthon wrote:
> For one time sequences like files and generators your code is broken for obvious reasons.

s/sequence/iterable/

-Miles
Re: Are min() and max() thread-safe?
On Sep 16, 2009, at 9:33 PM, Steven D'Aprano wrote:
> I have two threads, one running min() and the other running max() over the same list. I'm getting some mysterious results which I'm having trouble debugging. Are min() and max() thread-safe, or am I doing something fundamentally silly by having them walk over the same list simultaneously?

See for yourself:
http://svn.python.org/view/python/trunk/Python/bltinmodule.c?view=markup

min() and max() don't release the GIL, so yes, they are safe, and shouldn't see a list in an inconsistent state (with regard to the Python interpreter, but not necessarily to your application). But a threaded approach is somewhat silly, since the GIL ensures that they *won't* walk over the same list simultaneously (two separate lists, for that matter).

-Miles
Re: (A Possible Solution) Re: preferred way to set encoding for print
On Sep 16, 2009, at 12:39 PM, ~flow wrote:
> so: how can i tell python, in a configuration or using a setting in sitecustomize.py, or similar, to use utf-8 as a default encoding?

[snip Stdout_writer_with_ncrs solution]

This should work:

sys.stdout = io.TextIOWrapper(sys.stdout.buffer,
                              encoding=sys.stdout.encoding,
                              errors='xmlcharrefreplace')

http://mail.python.org/pipermail/python-list/2009-August/725100.html

-Miles
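The effect of the errors='xmlcharrefreplace' handler can be seen by wrapping an in-memory buffer instead of sys.stdout.buffer; this is just an illustration of the handler, not the sitecustomize setup itself:

```python
import io

buf = io.BytesIO()
out = io.TextIOWrapper(buf, encoding="ascii", errors="xmlcharrefreplace")
out.write("caf\u00e9")   # the e-acute is not representable in ASCII...
out.flush()
# ...so it is emitted as a numeric character reference instead of raising
assert buf.getvalue() == b"caf&#233;"
```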
Re: Why indentation is use to denote block of code?
On Sep 13, 2009, at 5:38 PM, AggieDan04 wrote:
> On Sep 13, 6:27 pm, Steven D'Aprano wrote:
>> On Sun, 13 Sep 2009 15:15:40 -0700, Chris Rebert wrote:
>>> In fact it's pretty much impossible to automatically indent Python code that has had its indentation removed; it's impossible to know for sure where the dedents should occur.
>>
>> Just like most other syntactic elements -- if you remove all the return statements from Python code, or dot operators, it's impossible to automatically add them back in. The only difference is that some (badly written?) applications mangle leading whitespace, but very few feel free to remove other text on a whim. I don't recall actually using a mail client or newsreader that removes leading whitespace when posting, but I've occasionally seen posts from others with all indentation removed, so presumably such badly-behaved applications do exist.
>
> I haven't seen it in a mail client, but it's very common in internet forums.

If you regularly deal with some sort of transport that messes with your leading whitespace, you may find Tools/scripts/pindent.py in the Python source distribution useful; it adds comments that act as block closers to your code, and can then use those comments to restore the correct indentation to a mangled version. (Most forums offer some sort of whitespace-preserving [code] tag, though; and pindent is relatively old, and apparently not well maintained (no support for with blocks).)

-Miles
Re: list as an instance attribute
On Sep 14, 2009, at 1:55 AM, Robin Becker wrote:
> Bruno Desthuilliers wrote:
>> pep08 : class names should be Capitalized. Also, if you're using Python 2.x, make it:
>>
>> class Primitive(object):
>>     #...
>
> ... I find it remarkable that the most primitive classes appear to break the pep08 convention eg object, str, list etc etc. In fact all such conventions appear to be broken more often than not. So the rule appears to be create a convention and then break it :)

More like break a convention and then create it. :) Before Python 2.2, built-in types were not classes at all; they couldn't be instantiated directly (from Python code), so you had to call the str() function to create an object of type string. I think there may have been some discussion of renaming the built-ins to match PEP 8 for Python 3, but if so I doubt it got very far.

-Miles
[issue2320] Race condition in subprocess using stdin
Changes by Chris Miles <miles.ch...@gmail.com>:

--
nosy: +chrismiles
[issue5468] urlencode does not handle bytes, and could easily handle alternate encodings
Changes by Miles Kaufmann <mile...@umich.edu>:

--
Removed file: http://bugs.python.org/file14796/urllib_parse.py3k.patch
Re: Why does this group have so much spam?
casebash <walkr...@gmail.com> wrote in message news:7294bf8b-9819-4b6d-92b2-afc1c8042...@x6g2000prc.googlegroups.com...
> So much of it could be removed even by simple keyword filtering.

Funny, I was just thinking recently about how *little* spam this list gets; on the other hand, I'm following it via the python-list@ mailing list. The list owners do a great job of keeping the level of spam at a minimum, though there are occasional false positives (like your post, apparently, since I'm only seeing the replies).

-Miles
[issue5468] urlencode does not handle bytes, and could easily handle alternate encodings
Miles Kaufmann <mile...@umich.edu> added the comment:

I've attached a patch that provides similar functionality to Dan Mahn's urlencode(), as well as providing encoding and errors parameters to parse_qs and parse_qsl, updating the documentation to reflect the added parameters, and adding test cases. The implementation of urlencode() is not the same as dmahn's, and has a more straightforward control flow and less code duplication than the current implementation. (For the tests, I tried to match the style of the file I was adding to with regard to (expect, result) order, which is why it's inconsistent.)

--
keywords: +patch
versions: +Python 3.2
Added file: http://bugs.python.org/file14796/urllib_parse.py3k.patch
Re: Need help with Python scoping rules
On Aug 26, 2009, at 1:11 PM, kj wrote:
> I think I understand the answers well enough. What I *really* don't understand is why this particular feature of Python (i.e. that functions defined within a class statement are forbidden from seeing other identifiers defined within the class statement) is generally considered to be perfectly OK. IMO it's a bizarre, inexplicable blindspot (which, among other things, gives rise to a certain worry about what other similar craziness lurks under Python's image of rationality). I have never seen even a half-hearted justification, from a language design point of view, for why this particular feature is worth having.

Guido's design justifications:
http://mail.python.org/pipermail/python-dev/2000-November/010598.html

My personal justification: Python has used the same basic method of class creation since the very beginning: create a new local namespace, execute the class suite in that namespace, and then create a class, using the contents of the namespace as the class attributes. The important thing to note here is that there are really *two* namespaces: the local namespace that exists while the class suite is being executed (what I call the suite namespace), and the namespace of the class itself; the first ceases to exist when the second is created. The two namespaces generally contain the same names at the point that the transfer occurs, but they don't have to; the metaclass (which constructs the class) is free to mess with the dictionary of attributes before creating the class.

Suppose for a moment that the suite namespace *were* visible to nested scopes. The simplest and most consistent implementation would be to have a closure generated by a class statement be similar to that generated by a function, i.e., the closure would be over the suite namespace. This hardly seems desirable, though, because the suite namespace and the class namespace would get out of sync when different objects were assigned to the class namespace:

class C:
    x = 1
    def foo(self):
        print x
        print self.x

>>> o = C()
>>> o.foo()
1
1
>>> o.x = 2
>>> o.foo()
1
2

Surely such an implementation would be considered an even larger Python wart than not having the suite namespace visible to nested scopes at all. But it's better than the alternative of trying to unify the class suite namespace and the class namespace, which would be a nightmare of special cases (adding/deleting class attributes? descriptors? __getattr__?) and require an implementation completely separate from that of normal nested scopes.

-Miles

P.S. Just for fun:

import types

def make_class(*bases):
    """Decorator to allow you to (ab)use a function as a class definition.
    The function must take no arguments and end with 'return locals()';
    bases are (optionally) specified as arguments to make_class;
    metaclasses other than 'type' are not supported.

    >>> @make_class
    ... def C():
    ...     greeting = 'Hello'
    ...     target = 'world'
    ...     def greet(self):
    ...         print '%s, %s' % (self.greeting, target)
    ...     return locals()
    ...
    >>> C().greet()
    Hello, world
    """
    def decorator(func):
        return type(func.func_name, bases, func())
    if len(bases) == 1 and isinstance(bases[0], types.FunctionType):
        func = bases[0]
        bases = (object,)
        return decorator(func)
    if not bases:
        bases = (object,)
    return decorator

-- 
http://mail.python.org/mailman/listinfo/python-list
Re: Need help with Python scoping rules
On Aug 27, 2009, at 4:49 PM, kj wrote:
> Miles Kaufmann <mile...@umich.edu> writes:
>> Guido's design justifications:
>> http://mail.python.org/pipermail/python-dev/2000-November/010598.html
>
> Ah! Clarity! Thanks! How did you find this? Did you know of this post already? Or is there some special way to search Guido's design justifications?

I just checked the python-dev archives around the time that PEP 227 was written.

>> ...because the suite namespace and the class namespace would get out of sync when different objects were assigned to the class namespace:
>>
>> class C:
>>     x = 1
>>     def foo(self):
>>         print x
>>         print self.x
>>
>> o = C()
>> o.foo()  -> 1, 1
>> o.x = 2
>> o.foo()  -> 1, 2
>
> But this unfortunate situation is already possible, because one can already define
>
> class C:
>     x = 1
>     def foo(self):
>         print C.x
>         print self.x
>
> which would lead to exactly the same thing.

You're right, of course. If I had been thinking properly, I would have posted this:

...the suite namespace and the class namespace would get out of sync when different objects were assigned to the class namespace:

# In a hypothetical Python with nested class suite scoping:
class C:
    x = 1
    @classmethod
    def foo(cls):
        print x
        print cls.x

>>> C.foo()
1
1
>>> C.x = 2
>>> C.foo()
1
2

With your example, the result is at least easily explainable: self.x is originally 1 because the object namespace inherits from the class namespace, but running 'o.x = 2' rebinds 'x' in the object namespace (without affecting the class namespace). It's a distinction that sometimes trips up newbies (and me, apparently ;) ), but it's straightforward to comprehend once explained. But the distinction between the class suite namespace and the class namespace is far more subtle; extending the lifetime of the first so that it still exists after the second is created is, IMO, asking for trouble (and trying to unify the two doubly so).

-Miles
Re: Waiting for a subprocess to exit
On Aug 20, 2009, at 10:13 PM, Ben Finney wrote:
> The module documentation has a section on replacing ‘os.system’ <http://docs.python.org/library/subprocess#replacing-os-system>, which says to use::
>
>     process = subprocess.Popen("mycmd" + " myarg", shell=True)
>     status = os.waitpid(process.pid, 0)
>
> But a ‘Popen’ instance has its own ‘wait’ method, which waits for exit <URL:http://docs.python.org/library/subprocess#subprocess.Popen.wait>. Why would I use ‘os.waitpid’ instead of::
>
>     process = subprocess.Popen("mycmd" + " myarg", shell=True)
>     process.wait()
>     status = process.returncode

Really, you can just use:

    process = subprocess.Popen("mycmd" + " myarg", shell=True)
    status = process.wait()

I'm not sure why the documentation suggests using os.waitpid.

I would recommend avoiding shell=True whenever possible. It's used in the examples, I suspect, to ease the transition from the functions being replaced, but all it takes is for a filename or some other input to unexpectedly contain whitespace or a metacharacter and your script will stop working, or worse, do damage (cf. the iTunes 2 installer debacle[1]). Leaving shell=False makes scripts more secure and robust; besides, when I'm putting together a command and its arguments, it's as convenient to build a list (['mycmd', 'myarg']) as it is a string (if not more so).

-Miles

[1]: http://apple.slashdot.org/article.pl?sid=01/11/04/0412209#comment_2518563
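On Python 3.5 and later, the create-wait-collect pattern discussed above is usually spelled with subprocess.run, which does all three steps in one call; a sketch using an argument list rather than shell=True (here spawning a child Python so the example is self-contained):

```python
import subprocess
import sys

# Run a child process from an argument list (no shell parsing involved),
# wait for it to exit, and get a CompletedProcess with the exit status.
result = subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"])
assert result.returncode == 3
```

Because the command is a list, a filename argument containing spaces or shell metacharacters is passed through verbatim, which is exactly the robustness argument made above.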
Re: Object Reference question
On Aug 20, 2009, at 11:07 PM, josef wrote:
> To begin, I'm new with python. I've read a few discussions about object references and I think I understand them. To be clear, Python uses a Pass By Object Reference model. x = 1: x becomes the object reference, while an object is created with the type 'int', value 1, and identifier (id(x)). Doing this with a class, x = myclass(), does the same thing, but with more or less object attributes. Every object has a type and an identifier (id()), according to the Python Language Reference for 2.6.2 section 3.1. x in both cases is the object reference. I would like to use the object to refer to the object reference.

Stop right there. 'x' is not *the* object reference. It is *an* object reference (or in my preferred terminology, a label). Suppose you do:

x = myclass()
y = x

The labels 'x' and 'y' both refer to the same object with equal precedence. There is no mapping from object back to label; it is a one-way pointer. Also importantly, labels themselves are not objects, and cannot be accessed or referred to. (This is a slight oversimplification; thanks to Python's reflection and introspection capabilities, it is possible to access labels to some extent, and in some limited situations it is possible to use stack inspection to obtain a label for an object. But this is hackish and error-prone, and should never be used when a more Pythonic method is available.)

> The following is what I would like to do: I have a list of class instances dk = [a, b, c, d], where a, b, c, d are object references. Entering dk gives me the object: [MyClass0 instance at 0x, MyClass1 instance at 0x0008, MyClass2 instance at 0x0010, ...] I need the object reference name (a, b, c, d) from dk to use as input for a file.

It sounds like you should either be storing that name as an attribute of the object, or using a dictionary ({'a': a, 'b': b, ...}).

-Miles
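The dictionary suggestion can be sketched with hypothetical stand-ins for the poster's classes: the name-to-object mapping is kept explicitly, so the names are available when writing the file, instead of trying to recover variable names from the objects.

```python
class MyClass0(object):
    pass

class MyClass1(object):
    pass

# Store the names alongside the objects; iterating yields both together.
dk = {"a": MyClass0(), "b": MyClass1()}

lines = ["%s: %s" % (name, type(obj).__name__)
         for name, obj in sorted(dk.items())]
assert lines == ["a: MyClass0", "b: MyClass1"]
```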
Re: Skipping a superclass
On Aug 2, 2009, at 5:36 AM, Steven D'Aprano wrote:
> I have a series of subclasses like this:
>
> class A(object):
>     def method(self, *args):
>         print "Lots of work gets done here in the base class"
>
> class B(A):
>     def method(self, *args):
>         print "A little bit of work gets done in B"
>         super(B, self).method(*args)
>
> class C(B):
>     def method(self, *args):
>         print "A little bit of work gets done in C"
>         super(C, self).method(*args)
>
> However, the work done in C.method() makes the work done in B.method() obsolete: I want one to run, or the other, but not both. C does need to inherit from B, for the sake of the other methods, so I want C.method() *only* to skip B while still inheriting from A. (All other methods have to inherit from B as normal.)

This might not be applicable to the larger problem you're trying to solve, but for this sample, I would write it as:

class A(object):
    def method(self, *args):
        self._method(*args)
        print "Lots of work gets done here in the base class"
    def _method(self, *args):
        pass  # or perhaps raise NotImplemented

class B(A):
    def _method(self, *args):
        print "A little bit of work gets done in B"

class C(B):
    def _method(self, *args):
        print "A little bit of work gets done in C"

> So what I have done is change the call to super in C to super(B, self) instead of super(C, self). It seems to work, but is this safe to do? Or are there strange side-effects I haven't seen yet?

In a diamond-inheritance situation, you may end up skipping methods besides just B.method().

-Miles
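The suggested refactoring (a template method in A that calls an overridable hook) can be checked with a runnable sketch; the print statements are replaced here by a recorded log so the behavior is observable rather than printed:

```python
class A(object):
    def method(self):
        self._method()              # subclass hook runs first...
        self.log.append("base work")  # ...then the base-class work

    def _method(self):
        pass  # hook: subclasses override this instead of method()

class B(A):
    def _method(self):
        self.log.append("B work")

class C(B):
    def _method(self):
        self.log.append("C work")  # replaces B's hook entirely

c = C()
c.log = []
c.method()
# Only C's hook ran: overriding replaces B's work instead of chaining to it.
assert c.log == ["C work", "base work"]
```

This achieves the "C's work instead of B's" goal through ordinary overriding, with no need to skip a class in the super() chain.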
Re: python 3 and stringio.seek
On Jul 28, 2009, at 6:30 AM, Michele Petrazzo wrote:
> Hi list, I'm trying to port my library to python 3, but I have a problem with the stringio.seek: the method no longer accepts a value like pos=-6 mode=1, but the old (2.X) version does... The error:
>
> File "/home/devel/Py3/lib/python3.0/io.py", line 2031, in seek
>     return self._seek(pos, whence)
> IOError: Can't do nonzero cur-relative seeks
>
> How solve this?

In Python 2, StringIO is a stream of bytes (non-Unicode characters). In Python 3, StringIO is a stream of text (Unicode characters). In the early development of Python 3 (and in 3.1's _pyio), it was implemented as a TextIOWrapper over a BytesIO buffer. TextIOWrapper does not support relative seeks because it is difficult to map the concept of a current position between bytes and the text that it encodes, especially with variable-width encodings and other considerations. Furthermore, the value returned from TextIOWrapper.tell() isn't just a file position but a cookie that contains other data necessary to restore the decoding mechanism to the same state. However, for the default encoding (utf-8), the current position is equivalent to that of the underlying bytes buffer.

In Python 3, StringIO is implemented using an internal buffer of Unicode characters. There is no technical reason why it can't support relative seeks; I assume it does not for compatibility with the original Python TextIOWrapper implementation (which is present in 3.1's _pyio, but not in 3.0). Note that because of the different implementations, StringIO.tell() (and seek) behaves differently for the C and Python implementations:

$ python3.1
>>> import io, _pyio
>>> s = io.StringIO('\u263A'); s.read(1), s.tell()
('☺', 1)
>>> s = _pyio.StringIO('\u263A'); s.read(1), s.tell()
('☺', 3)

The end result seems to be that, for text streams (including StringIO), you *should* treat the value returned by tell() as an opaque magic cookie, and *only* pass values to seek() that you have obtained from a previous tell() call. However, in practice, it appears that you *may* seek StringIO objects relatively by characters using s.seek(s.tell() + n), so long as you do not use the _pyio.StringIO implementation. If what you actually want is a stream of bytes, use BytesIO, which may be seeked (sought?) however you please.

I'm basing this all on my reading of the Python source (and svn history), since it doesn't seem to be documented, so take it with a grain of salt.

-Miles
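The practical upshot can be demonstrated side by side: byte streams support arbitrary relative seeks, while text streams should only be seeked to positions previously obtained from tell(). A small sketch:

```python
import io

# Byte streams: relative seeks work freely.
b = io.BytesIO(b"hello world")
b.read(5)                   # position is now 5
b.seek(-5, io.SEEK_CUR)     # back to the start
assert b.read(5) == b"hello"

# Text streams: seek only to values that came from tell().
s = io.StringIO("hello world")
s.read(5)
pos = s.tell()              # treat this as an opaque cookie
s.seek(pos)
assert s.read() == " world"
```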
Re: trouble with minidom
On Jul 21, 2009, at 8:08 PM, Ronn Ross wrote:
> Hello, I'm trying to read an xml file using minidom. The xml looks like:
>
> <rootNode>
>   <project>
>     <name>myProj</name>
>     <path>/here/</path>
>   </project>
> </rootNode>
>
> My code looks like so:
>
> from xml.dom.minidom import parse
> dom = parse("myfile.xml")
> for node in dom.getElementsByTagName("project"):
>     print('name: %s, path: %s \n') % (node.childNodes[0].nodeValue, node.childNodes[1])
>
> Unfortunately, it returns nodeValue as None. I'm trying to read the value out of the node, for example name: myProj. I haven't found much help in the documentation. Can someone point me in the right direction?

Two issues: In your example XML file, the first child node of the project Element is the Text node containing the whitespace between the <project> and <name> tags; node.childNodes[0] will select that whitespace node. And the nodeValue of an Element is null (None); in order to get the text contents of an element, you must get the nodeValue of the Text child node of the Element.

Like Gabriel, I would recommend using an XML library with a more concise API than the W3C DOM (I like lxml.objectify). But if you stick with xml.dom, use the specification as a reference:
http://www.w3.org/TR/REC-DOM-Level-1/
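One way to sidestep the whitespace Text nodes described above is to look the child elements up by tag name and take the nodeValue of their Text children; a sketch (parseString is used instead of parse so the example is self-contained):

```python
from xml.dom.minidom import parseString

doc = parseString(
    "<rootNode><project>"
    "<name>myProj</name><path>/here/</path>"
    "</project></rootNode>"
)
for node in doc.getElementsByTagName("project"):
    # Elements have nodeValue None; the text lives in their Text children.
    name = node.getElementsByTagName("name")[0].firstChild.nodeValue
    path = node.getElementsByTagName("path")[0].firstChild.nodeValue

assert name == "myProj"
assert path == "/here/"
```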
Re: Why not enforce four space indentations in version 3.x?
On Jul 15, 2009, at 4:26 PM, David Bolen wrote:
> Miles Kaufmann <mile...@umich.edu> writes:
>> On Jul 14, 2009, at 5:06 PM, David Bolen wrote:
>>> Are you sure? It seems to restrict them in the same block, but not in the entire file. At least I was able to use both space and tab indented blocks in the same file with Python 3.0 and 3.1.
>>
>> It seems to me that, within an indented block, Python 3.1 requires that you are consistent in your use of indentation characters *for that indentation level*. For example, the following code seems to be allowed: [...]
>
> Um, right - in other words, what I said :-)

I wasn't trying to correct you, just being more explicit. :) After reading your post, I still wasn't sure if the restriction on mixing spaces and tabs applied to nested blocks; I was surprised that the code sample I included was allowed.

-Miles
Re: missing 'xor' Boolean operator
On Jul 15, 2009, at 1:43 PM, Jean-Michel Pichavant wrote:
> Hrvoje Niksic wrote:
> [snip]
>> Note that in Python "A or B" is in fact not equivalent to "not(not A and not B)".
>
> l = [(True, True), (True, False), (False, True), (False, False)]
> for p in l:
> ...     p[0] or p[1]
> [snip]
>
> Did I make twice the same obvious error?

Try again with:

l = [('foo', 'bar'), ('foo', ''), ('', 'bar'), ('', '')]

-Miles
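The point of the suggested string pairs is that the difference only shows up with truthy operands other than True/False: 'or' returns one of the operands themselves, while the not/and form collapses to a bool. A minimal sketch:

```python
a, b = 'foo', ''

# 'or' returns the first truthy operand itself...
assert (a or b) == 'foo'

# ...whereas the De Morgan form returns a plain boolean.
assert (not (not a and not b)) is True

# With pure True/False operands the two expressions always agree,
# which is why the four boolean pairs above could not expose this.
assert (True or False) == (not (not True and not False))
```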
Re: missing 'xor' Boolean operator
On Jul 15, 2009, at 1:55 PM, Emile van Sebille wrote:
> On 7/15/2009 10:43 AM Jean-Michel Pichavant said...
>> Hrvoje Niksic wrote:
>> [snip]
>>> Note that in Python "A or B" is in fact not equivalent to "not(not A and not B)".
>> Did I make twice the same obvious error?
>
> No -- but in the not(not... example it doesn't short-circuit.

No; like 'A or B', 'not (not A and not B)' does in fact short-circuit if A is True. (The 'and' condition does not have to evaluate the right operand when 'not A' is False.)

-Miles
Re: python first assignment of a global variable
On Jul 15, 2009, at 1:55 PM, Rodrigue wrote:
> Basically, I was very surprised to discover that e() raises an exception, but even more that e_raise() points to "if not MY_GLOBAL". Is the problem not really when I assign? My assumption is that some reordering is happening behind the scenes that creates a situation similar to the += which assigns, hence expects to be at the local level.

The determination of whether a name is a reference to a local or global variable is made at compile time. When a function contains a single assignment (or augmented assignment) to a name, the compiler generates bytecode such that all references to that name within the function will be looked up in the local scope only, including those before the assignment statement.

-Miles
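The behavior described can be reproduced with a minimal sketch; the names MY_GLOBAL and e_raise follow the post, but the function body here is a guess at its shape, since the original code is not quoted:

```python
MY_GLOBAL = False

def e_raise():
    # The assignment below makes MY_GLOBAL local to the *whole* function
    # at compile time, so this earlier read raises UnboundLocalError.
    if not MY_GLOBAL:
        MY_GLOBAL = True

try:
    e_raise()
    raised = False
except UnboundLocalError:
    raised = True

assert raised
```

Declaring `global MY_GLOBAL` at the top of the function would make both the read and the assignment refer to the module-level name.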
Re: Why not enforce four space indentations in version 3.x?
On Jul 14, 2009, at 5:06 PM, David Bolen wrote:
> Are you sure? It seems to restrict them in the same block, but not in
> the entire file. At least I was able to use both space and tab
> indented blocks in the same file with Python 3.0 and 3.1.

It seems to me that, within an indented block, Python 3.1 requires that you are consistent in your use of indentation characters *for that indentation level*. For example, the following code seems to be allowed:

    def foo():
    TABif True:
    TABSPSPx = 1
    TABelse:
    TABTABx = 2
    TABreturn x

But replacing any of the first tabs on each line with 8 spaces (without replacing them all), which previously would have been allowed, is now an error.

-Miles
Re: missing 'xor' Boolean operator
On Jul 15, 2009, at 12:07 AM, Dr. Phillip M. Feldman wrote:
> I appreciate the effort that people have made, but I'm not impressed
> with any of the answers. For one thing, xor should be able to accept
> an arbitrary number of input arguments (not just two)

You originally proposed this in the context of the existing short-circuit boolean operators. Those operators (being infix) take only two operands.

> and should return True if and only if the number of input arguments
> that evaluate to True is odd

The existing and/or operators always return the value of one of the operands--not necessarily True or False--which is another important property, but one that can't be translated fully to xor. Given the lack of context in your original post, hopefully you'll forgive me being unimpressed by your not being impressed. :)

> Here's my code:
>
> def xor(*args):
>     """xor accepts an arbitrary number of input arguments, returning
>     True if and only if bool() evaluates to True for an odd number of
>     the input arguments."""
>     result = False
>     for arg in args:
>         if bool(arg):
>             result = not result
>     return result

If all you want is a True or False result, I'd write it like this:

    import operator

    def xor(*args):
        return reduce(operator.xor, map(bool, args))  # or imap

In order to make it act more like the other logical operators, I'd use MRAB's 2-argument xor as the reducer--though I can't really see the utility.

    def xor2(a, b):
        return (not b and a) or (not a and b)

    def xor(*args):
        return reduce(xor2, args)

You may also find this question interesting: http://stackoverflow.com/questions/432842/

-Miles
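In Python 3, reduce lives in functools (and imap is gone, since map is already lazy), so the True/False variant of the above becomes:

```python
from functools import reduce
from operator import xor as bitxor

def xor(*args):
    # True iff an odd number of the arguments are truthy;
    # bool ^ bool yields a bool, so the result is True or False.
    return reduce(bitxor, map(bool, args))

print(xor(1, 0, 'a'))  # False -- two truthy arguments
print(xor(1, 0, ''))   # True  -- one truthy argument
```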
Re: Catching control-C
On Jul 9, 2009, at 9:20 AM, Lie Ryan wrote: Michael Mossey wrote: I want to understand better what the secret is to responding to a ctrl-C in any shape or form. Are you asking: when would the python interpreter process KeyboardInterrupt? ... In single threaded python program, the currently running thread is always the main thread (which can handle KeyboardInterrupt). I believe SIGINT is checked at every ticks. But SIGINT cannot interrupt atomic operations (i.e. it cannot interrupt long operations that takes a single tick). Some otherwise atomic single-bytecode operations (like large integer arithmetic) do manual checks for whether signals were raised (though that won't help at all if the operation isn't on the main thread). I believe a tick in python is equivalent to a single bytecode, but please correct me if I'm wrong. Not all opcodes qualify as a tick. In general, those opcodes that cause control to remain in the eval loop (and not make calls to other Python or C functions) don't qualify as ticks (but there are exceptions, e.g. so that while True: pass is interruptible). In Python/ceval.c: PyEval_EvalFrameEx(), those opcodes that don't end in goto fast_next_opcode are ticks. Please correct me if _I'm_ wrong! :) -Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: function local namespace question
On Jul 8, 2009, at 1:35 PM, Paul LaFollette wrote: I cannot figure out any way to get a hook into the local namespace of a user defined function. I have tried making a wrapper class that grabs the function call and then uses exec to invoke myfunction.__code__ with my own dictionaries. This runs the (no argument) function properly (losing the return value, but I can deal with that) but never accesses the local logging dictionary that I specify in the exec() call. Perhaps the local namespace of a function is not a dict at all? Right. Within functions, local variable name accesses and assignments are compiled into bytecode instructions (LOAD_FAST, STORE_FAST) that manipulate pointers to objects by indexing directly into a C array in the frame. The locals dictionary of a frame is generated on-demand from that array when accessed. There is no way that I'm aware of to directly hook into function namespace access and manipulation. -Miles -- http://mail.python.org/mailman/listinfo/python-list
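The compiled form described above can be inspected with the dis module (Python 3 shown here): local variable access shows up as LOAD_FAST/STORE_FAST instructions, not as dictionary lookups.

```python
import dis

def f():
    x = 1
    return x

# The assignment compiles to STORE_FAST and the read to LOAD_FAST,
# both of which index directly into the frame's array of locals.
ops = [ins.opname for ins in dis.get_instructions(f)]
print(ops)
```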
Re: Passing parameters for a C program in Linux.
On Jun 30, 2009, at 6:46 AM, venutaurus...@gmail.com wrote:
> I have to write an automated script which will test my c program.
> That program when run will ask for the commands.

Keep in mind that, if your test script checks the program's output before giving it input, you can run into problems with buffering. The standard C library uses line-based buffering when a program is using a terminal for output, but when it's outputting to a pipe it uses block buffering. This can be a problem when running a process using subprocess--your program will buffer the prompt, and your test script won't see it, so the test will deadlock. The problem can also exist in the opposite direction.

Possible solutions:

- Explicitly set both your test script and your program to have line-buffered output.
- Add a flush statement whenever you finish writing output and expect input.
- Use pexpect, which uses a pseudo-tty and will make C stdio default to line buffering.
- Use pdpi's solution, which, since it doesn't wait for a prompt before supplying input, doesn't have this issue.
Re: Regular Expression Non Capturing Grouping Does Not Work.
On Jun 27, 2009, at 3:28 AM, Virtual Buddha wrote: Hello all, I am having some difficulties with the non-capturing grouping in python regular expression module. Even the code from the online documentation (http://docs.python.org/ howto/regex.html#non-capturing-and-named-groups) does not seem to work. ... Notice that you are calling .group() on the match object instead of .groups(). Without any arguments, .group() is equivalent to .group(0), which means return the entire matching string. http://docs.python.org/library/re.html#re.MatchObject.group -Miles -- http://mail.python.org/mailman/listinfo/python-list
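The difference in a tiny example (the pattern is chosen purely to show a non-capturing group alongside a capturing one):

```python
import re

m = re.match(r'(?:ab)(c)', 'abc')
print(m.group())   # 'abc' -- the entire match; .group() == .group(0)
print(m.groups())  # ('c',) -- capturing groups only; (?:ab) is excluded
```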
Re: No trees in the stdlib?
João Valverde wrote:
> To answer the question of what I need the BSTs for, without getting
> into too many boring details it is to merge and sort IP blocklists,
> that is, large datasets of ranges in the form of (IP address, IP
> address, string). Originally I was also serializing them in a binary
> format (but no more after a redesign). I kept the merge and sort part
> as a helper script, but that is considerably simpler to implement.
> ...
> As an anecdotal data point (honestly not trying to raise the "Python
> is slow" strawman), I implemented the same algorithm in C and Python,
> using pyavl. Round numbers were 4 mins vs 4 seconds, against Python
> (plus pyavl). Even considering I'm a worse Python programmer than C
> programmer, it's a lot. I know many will probably think I tried to do
> C in Python but that's not the case, at least I don't think so.
> Anyway like I said, not really relevant to this discussion.

What format were you using to represent the IP addresses? (Is it a Python class?) And why wouldn't you use a network address/subnet mask pair to represent block ranges? (It seems like being able to represent ranges that don't fit into a subnet's 2^n block wouldn't be that common of an occurrence, and that it might be more useful to make those ranges easier to manipulate.)

One of the major disadvantages of using a tree container is that usually multiple comparisons must be done for every tree operation. When that comparison involves a call into Python bytecode (for custom cmp/lt methods) the cost can be substantial. Compare that to Python's hash-based containers, which only need to call comparison methods in the event of hash collisions (and that's hash collisions, not hash table bucket collisions, since the containers cache each object's hash value). I would imagine that tree-based containers would only be worth using with objects with comparison methods implemented in C.
Not that I'm trying to be an apologist, or reject your arguments; I can definitely see the use case for a well-implemented, fast tree- based container for Python. And so much the better if, when you need one, there was a clear consensus about what package to use (like PIL for image manipulation--it won't meet every need, and there are others out there, but it's usually the first recommended), rather than having to search out and evaluate a dozen different ones. -Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: No trees in the stdlib?
On Jun 26, 2009, at 2:23 AM, Chris Rebert wrote: On Thu, Jun 25, 2009 at 10:55 PM, João Valverdebacku...@netcabo.pt wrote: Aahz wrote: In article mailman.2139.1245994218.8015.python-l...@python.org, Tom Reed tomree...@gmail.com wrote: Why no trees in the standard library, if not as a built in? I searched the archive but couldn't find a relevant discussion. Seems like a glaring omission considering the batteries included philosophy, particularly balanced binary search trees. No interest, no good implementations, something other reason? Seems like a good fit for the collections module. Can anyone shed some light? What do you want such a tree for? Why are dicts and the bisect module inadequate? Note that there are plenty of different tree implementations available from either PyPI or the Cookbook. Simple example usage case: Insert string into data structure in sorted order if it doesn't exist, else retrieve it. That's pretty much the bisect module in a nutshell. It manipulates a sorted list using binary search. With O(n) insertions and removals, though. A decent self-balancing binary tree will generally do those in O(log n). -Miles -- http://mail.python.org/mailman/listinfo/python-list
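The usage case above ("insert in sorted order if absent, else retrieve") looks roughly like this with the bisect module -- the list insert is what makes each insertion O(n):

```python
import bisect

words = []
for w in ['pear', 'apple', 'orange', 'apple']:
    i = bisect.bisect_left(words, w)      # O(log n) binary search
    if i < len(words) and words[i] == w:
        pass                              # already present: "retrieve" words[i]
    else:
        words.insert(i, w)                # O(n): shifts the tail of the list

print(words)  # ['apple', 'orange', 'pear']
```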
Re: urllib2.urlopen issue
On Jun 24, 2009, at 2:59 PM, David wrote:
> On Jun 24, 11:27 am, Chris Rebert wrote:
>> On Wed, Jun 24, 2009 at 10:50 AM, David wrote:
>>> hello, I have a url that is
>>> http://query.directrdr.com/ptrack?pid=225&v_url=http://www.plentyoffish.com&keyword=flowers&feed=1&ip=12.2.2.2&said=$said
>>> If I open it on a browser, I can get its contents without any
>>> problem. However, if I use following code,
>>>
>>>     import urllib2
>>>     url = 'http://query.directrdr.com/ptrack?pid=225&v_url=http://www.plentyoffish.com&keyword=flowers&feed=1&ip=12.2.2.2&said=$said'
>>>     xml = urllib2.urlopen(url).read()
>>>
>>> then I get an exception of
>>>
>>>     File "/usr/lib/python2.5/urllib2.py", line 1082, in do_open
>>>       raise URLError(err)
>>>     urllib2.URLError: <urlopen error (-2, 'Name or service not known')>
>>
>> Unable to reproduce with either urllib or urllib2's urlopen(). I get
>> some XML back without error both ways. Using Python 2.6.2 on Mac OS X.
>
> Thanks Aahz. And thanks Chris. The XML content is what I am looking
> for. I use Python 2.5. Maybe I should update to 2.6.2? Python version
> problem?

No, it also works for me on Python 2.5.1. A wild guess: does this code work?

    import socket
    socket.gethostbyname(socket.gethostname())

If it throws a similar exception ("Name or service not known"), the root problem may be a misconfiguration in your /etc/hosts or /etc/resolv.conf files.

-Miles
Re: [Mac] file copy
On Jun 23, 2009, at 9:52 AM, Tobias Weber wrote:
> Hi, which is the best way to copy files on OS X? I want to preserve
> resource forks and extended attributes.
> ...
> bin/cp -p

This. cp -p, mv, rsync -E, tar, and other utilities will use the copyfile(3) API to preserve extended attributes, resource forks, and ACLs. cp -Rp should be just as safe as a Finder copy--moreso if you run it as root--with the exception of preserving creation dates. Or if you're worried about hard links, check out ditto(1). You presumably already know this, but avoid shutil at all costs.

BackupBouncer (http://www.n8gray.org/code/backup-bouncer/) makes testing what gets preserved by various methods of copying quick and easy. The results for a Finder copy:

    Verifying: basic-permissions ... FAIL (Critical)
    Verifying: timestamps ... ok (Critical)
    Verifying: symlinks ... ok (Critical)
    Verifying: symlink-ownership ... FAIL
    Verifying: hardlinks ... FAIL (Important)
    Verifying: resource-forks ...
       Sub-test: on files ... ok (Critical)
       Sub-test: on hardlinked files ... FAIL (Important)
    Verifying: finder-flags ... ok (Critical)
    Verifying: finder-locks ... ok
    Verifying: creation-date ... ok
    Verifying: bsd-flags ... ok
    Verifying: extended-attrs ...
       Sub-test: on files ... ok (Important)
       Sub-test: on directories ... ok (Important)
       Sub-test: on symlinks ... ok
    Verifying: access-control-lists ...
       Sub-test: on files ... ok (Important)
       Sub-test: on dirs ... ok (Important)
    Verifying: fifo ... FAIL
    Verifying: devices ... FAIL
    Verifying: combo-tests ...
       Sub-test: xattrs + rsrc forks ... ok
       Sub-test: lots of metadata ... FAIL

sudo cp -Rp:

    Verifying: basic-permissions ... ok (Critical)
    Verifying: timestamps ... ok (Critical)
    Verifying: symlinks ... ok (Critical)
    Verifying: symlink-ownership ... ok
    Verifying: hardlinks ... FAIL (Important)
    Verifying: resource-forks ...
       Sub-test: on files ... ok (Critical)
       Sub-test: on hardlinked files ... FAIL (Important)
    Verifying: finder-flags ... ok (Critical)
    Verifying: finder-locks ... ok
    Verifying: creation-date ... FAIL
    Verifying: bsd-flags ... ok
    Verifying: extended-attrs ...
       Sub-test: on files ... ok (Important)
       Sub-test: on directories ... ok (Important)
       Sub-test: on symlinks ... ok
    Verifying: access-control-lists ...
       Sub-test: on files ... ok (Important)
       Sub-test: on dirs ... ok (Important)
    Verifying: fifo ... ok
    Verifying: devices ... ok
    Verifying: combo-tests ...
       Sub-test: xattrs + rsrc forks ... ok
       Sub-test: lots of metadata ... ok

-Miles
Re: urllib2 content-type headers
On Jun 21, 2009, at 12:01 PM, TYR wrote: Unfortunately, I'm getting nothing but 400 Bad Requests. I suspect this is due to an unfeature of urllib2. Notably, although you can use urllib2.Request's add_header method to append a header, the documentation (http://docs.python.org/library/urllib2.html) says that: remember that a few standard headers (Content-Length, Content-Type and Host) are added when the Request is passed to urlopen() (or OpenerDirector.open()). And: Note that there cannot be more than one header with the same name, and later calls will overwrite previous calls in case the key collides. To put it another way, you cannot rely on Content-Type being correct because whatever you set it to explicitly, urllib2 will silently change it to something else which may be wrong, and there is no way to stop it. What happened to explicit is better than implicit? Those headers are added (by AbstractHTTPHandler.do_request_) only if they are missing. -Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: generator expression works in shell, NameError in script
On Jun 19, 2009, at 8:45 AM, Bruno Desthuilliers wrote:
> >>> class Foo(object):
> ...     bar = ['a', 'b', 'c']
> ...     baaz = list((b, b) for b in bar)
>
> but it indeed looks like using bar.index *in a generator expression*
> fails (at least in 2.5.2):
>
> >>> class Foo(object):
> ...     bar = ['a', 'b', 'c']
> ...     baaz = list((bar.index(b), b) for b in bar)
> ...
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "<stdin>", line 3, in Foo
>   File "<stdin>", line 3, in <genexpr>
> NameError: global name 'bar' is not defined

The reason that the first one works but the second fails is clearer if you translate each generator expression to the approximately equivalent generator function:

    class Foo(object):
        bar = ['a', 'b', 'c']
        def _gen(_0):
            for b in _0:
                yield (b, b)
        baaz = list(_gen(iter(bar)))

    # PEP 227: "the name bindings that occur in the class block
    # are not visible to enclosed functions"
    class Foo(object):
        bar = ['a', 'b', 'c']
        def _gen(_0):
            for b in _0:
                yield (bar.index(b), b)   # NameError: 'bar' not visible here
        baaz = list(_gen(iter(bar)))

-Miles
Re: class or instance method
On Jun 21, 2009, at 5:23 PM, Scott David Daniels wrote:
> Hrvoje Niksic wrote:
>> class class_or_instance(object):
>>     def __init__(self, fn):
>>         self.fn = fn
>>     def __get__(self, obj, cls):
>>         if obj is not None:
>>             return lambda *args, **kwds: self.fn(obj, *args, **kwds)
>>         else:
>>             return lambda *args, **kwds: self.fn(cls, *args, **kwds)
>
> Just to polish a bit:
>
>     import functools
>
>     class ClassOrInstance(object):
>         def __init__(self, fn):
>             self._function = fn
>             self._wrapper = functools.wraps(fn)
>         def __get__(self, obj, cls):
>             return self._wrapper(functools.partial(self._function,
>                                  cls if obj is None else obj))

    from types import MethodType

    class ClassOrInstance(object):
        def __init__(self, func):
            self._func = func
        def __get__(self, obj, cls):
            return MethodType(self._func, cls if obj is None else obj, cls)

-Miles
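In Python 3, types.MethodType takes only two arguments (function, instance), so a sketch of the same descriptor there looks like this (the Demo class is just for illustration):

```python
from types import MethodType

class ClassOrInstance:
    """Descriptor: bind the function to the instance when accessed on
    one, otherwise to the class itself."""
    def __init__(self, func):
        self._func = func
    def __get__(self, obj, cls):
        return MethodType(self._func, cls if obj is None else obj)

class Demo:
    @ClassOrInstance
    def who(self_or_cls):
        return self_or_cls

print(Demo.who() is Demo)     # True -- bound to the class
d = Demo()
print(d.who() is d)           # True -- bound to the instance
```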
[issue6234] cgi.FieldStorage is broken when given POST data
Changes by Miles Kaufmann mile...@umich.edu: -- nosy: +milesck ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6234 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5468] urlencode does not handle bytes, and could easily handle alternate encodings
Miles Kaufmann mile...@umich.edu added the comment: parse_qs and parse_qsl should also grow encoding and errors parameters to pass to the underlying unquote(). -- nosy: +milesck ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5468 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
Re: Function/method returning list of chars in string?
On Jun 9, 2009, at 6:05 AM, Diez B. Roggisch wrote: Also as list-comps are going away and are replaced by list(generator-expression) Where did you hear that? -Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: random number including 1 - i.e. [0,1]
On Jun 9, 2009, at 7:05 PM, Mensanator wrote:
> On Jun 9, 4:33 pm, Esmail wrote:
>> Hi, random.random() will generate a random value in the range [0, 1).
>> Is there an easy way to generate random values in the range [0, 1]?
>> I.e., including 1? I am implementing an algorithm and want to stay as
>> true to the original design specifications as possible though I
>> suppose the difference between the two max values might be minimal.
>
> I'm curious what algorithm calls for random numbers on a closed
> interval.
>
>> ps: I'm confused by the docs for uniform():
>>
>>     random.uniform(a, b)
>>     Return a random floating point number N such that a <= N <= b
>>     for a <= b
>
> That's wrong. Where did you get it?

http://docs.python.org/library/random.html

-Miles
Re: Winter Madness - Passing Python objects as Strings
On Jun 4, 2009, at 3:25 AM, Hendrik van Rooyen wrote:
> A "can" is like a pickle, in that it is a string, but anything can be
> canned. Unlike a pickle, a can cannot leave the process, though,
> unless the object it points to lives in shared memory. If you have
> any interest, contact me and I will send you the source.

Sounds like di(), which can be written:

    import _ctypes
    di = _ctypes.PyObj_FromPtr

    def can(o):
        return str(id(o))

    def uncan(s):
        return di(int(s))

http://www.friday.com/bbum/2007/08/24/python-di/

-Miles
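Round-tripping an object through its id this way works in CPython specifically (where id() is the object's address), and only while the original object is still alive -- otherwise the pointer dangles:

```python
import _ctypes

def can(o):
    # CPython-specific: id() is the object's memory address
    return str(id(o))

def uncan(s):
    # Dereference the address back to the live object
    return _ctypes.PyObj_FromPtr(int(s))

obj = ['anything']
token = can(obj)           # a plain string
print(uncan(token) is obj) # True -- same object, not a copy
```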
Re: How to check all elements of a list are same or different
On Wed, Apr 15, 2009 at 8:48 PM, Paul Rubin wrote:
> I'd use:
>
>     from operator import eq
>     all_the_same = reduce(eq, mylist)

That won't work for a sequence of more than two items, since after the first reduction, the reduced value that you're comparing to is the boolean result:

    >>> reduce(eq, [0, 0, 0])
    False
    >>> reduce(eq, [0, 1, False])
    True

I'd use:

    # my preferred:
    def all_same(iterable):
        it = iter(iterable)
        first = it.next()
        return all(x == first for x in it)

    # or, for the pathologically anti-generator-expression crowd:
    from functools import partial
    from operator import eq
    from itertools import imap

    def all_same(iterable):
        it = iter(iterable)
        return all(imap(partial(eq, it.next()), it))

-Miles
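A Python 3 port of the preferred version (it.next() became next(it)); handling of an empty iterable is a choice the original leaves open -- here it is treated as vacuously true:

```python
def all_same(iterable):
    it = iter(iterable)
    try:
        first = next(it)
    except StopIteration:
        return True  # empty iterable: vacuously all the same
    # Short-circuits on the first mismatch
    return all(x == first for x in it)

print(all_same([0, 0, 0]))      # True (where reduce(eq, ...) gave False)
print(all_same([0, 1, False]))  # False (where reduce(eq, ...) gave True)
```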
Re: video capture in Python ? (Tim Roberts)
Hi, You could try the python wrapper for OpenCV, here is the link: http://code.google.com/p/ctypes-opencv/ Regards Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: Unsupported operand types in if/else list comprehension
On Fri, Apr 10, 2009 at 5:26 PM, Mike H wrote:
> Thanks to all of you. FYI, I'm doing this because I'm working on
> creating some insert statements in SQL, where string values need to
> be quoted, and integer values need to be unquoted.

This is what you should have posted in the first place. Your solution is entirely the wrong one, because it will break if your input strings contain the quote character (and suffers from other issues as well)--this is where SQL injection vulnerabilities come from. The safe and correct way is to allow your database driver to insert the parameters into the SQL query for you; it will look something like this (though the exact details will vary depending on what module you're using):

    cursor.execute('INSERT INTO my_table VALUES (?, ?, ?)',
                   ['test', 1, 'two'])

-Miles
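A self-contained version of the same pattern with the stdlib sqlite3 module (the table and values are made up for the demonstration); note the data can contain quote characters without breaking the statement:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE my_table (a TEXT, b INTEGER, c TEXT)')

# The driver substitutes each '?' safely itself; the embedded
# apostrophe in the first value never reaches the SQL parser as syntax.
conn.execute('INSERT INTO my_table VALUES (?, ?, ?)',
             ["it's quoted safely", 1, 'two'])

row = conn.execute('SELECT * FROM my_table').fetchone()
print(row)  # ("it's quoted safely", 1, 'two')
```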
Re: Adding a Badge to an Icon in Mac OS X
On Fri, Apr 10, 2009 at 5:22 PM, bingo wrote: PyObjc seems to offer the option to add badges to icons in the doc. I need to add badges to any icon... kinda like SCPlugin and dropbox do. I think that SCPlugin is doing it through carbon Icon Services. But I am still trying to figure out how it is done! I believe those programs are able to do so because they are Finder plugins--it's not something that a separate program could do. This isn't really a Python question, though; you'd probably have better luck finding answers on a OS X-related list. -Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: Why does Python show the whole array?
On Thu, Apr 9, 2009 at 2:59 AM, Peter Otten wrote: Lawrence D'Oliveiro wrote: This is why conditional constructs should not accept any values other than True and False. So you think if test.find(item) == True: ... would have been better? Clearly, any comparison with a boolean literal should be illegal. ;) -Miles P.S. ... really, though. -- http://mail.python.org/mailman/listinfo/python-list
Re: more fun with iterators (mux, demux)
On Wed, Apr 8, 2009 at 1:21 PM, pataphor wrote: On Wed, 08 Apr 2009 10:51:19 -0400 Neal Becker wrote: What was wrong with this one? def demux(iterable, n): return tuple(islice(it, i, None, n) for (i, it) in enumerate(tee(iterable, n))) Nothing much, I only noticed after posting that this one handles infinite sequences too. For smallish values of n it is acceptable. I assume that smallish values of n refers to the fact that itertools.tee places items into every generator's internal deque, which islice then skips over, whereas your version places items only into the deque of the generator that needs it. However, for small n, the tee-based solution has the advantage of having most of the work done in C instead of in Python generator functions; in my limited benchmarking, the point where your version becomes faster is somewhere around n=65. -Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: is there a rwlock implementation in python library?
On Wed, Apr 8, 2009 at 11:10 PM, Ken wrote: I need a read-write-lock, is there already an implementation in the standard library? No, but there are several recipes on ActiveState: http://code.activestate.com/recipes/413393/ http://code.activestate.com/recipes/502283/ http://code.activestate.com/recipes/465156/ -Miles -- http://mail.python.org/mailman/listinfo/python-list
Re: more fun with iterators (mux, demux)
On Mon, Apr 6, 2009 at 8:05 PM, Neal Becker wrote:
> I'm trying to make a multiplexor and demultiplexor, using generators.
> The multiplexor will multiplex N sequences -> 1 sequence (assume
> equal length). The demultiplexor will do the inverse. The demux has
> me stumped. The demux should return a tuple of N generators.

    from itertools import islice, tee

    def demux(iterable, n):
        return tuple(islice(it, i, None, n)
                     for (i, it) in enumerate(tee(iterable, n)))

-Miles
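Applied to a round-robin-multiplexed stream, the definition above splits it back out like so (sample data invented for the demonstration):

```python
from itertools import islice, tee

def demux(iterable, n):
    # Output stream i yields every n-th item starting at offset i;
    # tee gives each stream an independent view of the input.
    return tuple(islice(it, i, None, n)
                 for (i, it) in enumerate(tee(iterable, n)))

a, b = demux([1, 'x', 2, 'y', 3, 'z'], 2)
print(list(a))  # [1, 2, 3]
print(list(b))  # ['x', 'y', 'z']
```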