Re: Python vs. C++11
On Feb 13, 4:21 am, sturlamolden wrote:
> There are big similarities between Python and the new C++ standard. Now
> we can actually use our experience as Python programmers to write
> fantastic C++ :-)

And of course the keyword 'auto', which means automatic type inference.
-- http://mail.python.org/mailman/listinfo/python-list
Python vs. C++11
There are big similarities between Python and the new C++ standard. Now we can actually use our experience as Python programmers to write fantastic C++ :-)

Here is a small list of similarities to consider:

Iterate over any container, like Python's for loop:
    for (type& item : container)

Pointer type with reference counting:
    std::shared_ptr

Python-like datatypes:
    tuple      std::tuple
    list       std::vector, std::list, std::stack
    dict       std::unordered_map
    set        std::unordered_set
    complex    std::complex
    deque      std::deque
    lambda     [name](params){body}
    heapq      std::make_heap, std::push_heap, std::pop_heap
    weakref    std::weak_ptr
    str        std::string -- unicode, raw strings, etc. work as in Python

Other things of interest:
    std::regex, std::cmatch
    std::thread -- thread API very similar to Python's
    std::atomic -- datatype for atomic operations
    std::mt19937 -- same PRNG as Python

-- http://mail.python.org/mailman/listinfo/python-list
Re: Data Plotting Library Dislin 10.2
On 18 Nov, 22:16, Tony the Tiger wrote:
> Ya, but apparently no source unless you dig deep into your pockets.
> Really, why would we need this when we already have gnuplot?
> Just wondering...

Dislin is a very nice plotting library for scientific data, particularly for scientists and engineers using Fortran (of whom there are, incidentally, quite a few). For Python, we have Matplotlib as well (at least for 2D plots).

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Complex sort on big files
On Aug 1, 5:33 pm, aliman wrote:
> I've read the recipe at [1] and understand that the way to sort a
> large file is to break it into chunks, sort each chunk and write
> sorted chunks to disk, then use heapq.merge to combine the chunks as
> you read them.

Or just memory-map the file (mmap.mmap) and do an inline .sort() on the bytearray (Python 3.2). With Python 2.7, use e.g. numpy.memmap instead. If the file is large, use 64-bit Python. You don't have to process the file in chunks, as the operating system will take care of those details.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Complex sort on big files
On Aug 1, 5:33 pm, aliman wrote:
> I understand that sorts are stable, so I could just repeat the whole
> sort process once for each key in turn, but that would involve going
> to and from disk once for each step in the sort, and I'm wondering if
> there is a better way.

I would consider memory mapping the file and sorting it inline. Sorting a binary file of bytes with NumPy is as easy as this:

import numpy as np
f = np.memmap(filename, mode='r+', dtype=np.uint8)   # read-write map of the existing file
f.sort(kind='quicksort')
del f

(You can define dtype for any C data type or struct.)

If the file is really big, use 64-bit Python. With memory mapping you don't have to worry about processing the file in chunks, because the operating system will take care of those details.

I am not sure how to achieve this (inline file sort) with the standard library mmap and timsort, so I'll leave that out.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
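Since dtype can describe any C struct, the same idea extends to the multi-key sort asked about above. A minimal sketch, assuming a made-up record layout and file name:

import numpy as np

# Hypothetical record layout -- adjust to the real binary format.
rec = np.dtype([('key1', np.int32), ('key2', np.int32), ('value', np.float64)])

f = np.memmap('data.bin', mode='r+', dtype=rec)   # 'data.bin' is a placeholder
f.sort(order=['key1', 'key2'])                    # in-place sort by key1, then key2
del f                                             # flush and unmap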
Re: I am fed up with Python GUI toolkits...
On 21 Jul, 16:52, Kevin Walzer wrote:
> I bet that other scripting languages would
> piggyback on top of it (Lua or Ruby bindings for Python's GUI toolkit,
> anyone?) because doing that is less work than writing your own toolkit
> from scratch.

No doubt about that. Lua has a nice GUI toolkit by the way, which also has a C API. Currently it works on GTK+, Motif and Windows. The C API is very small, only about 100 functions. So it makes a good candidate for a new Cython-based toolkit, even without piggybacking on Lua.

http://www.tecgraf.puc-rio.br/iup/
-- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 22 Jul, 02:34, Gregory Ewing wrote: > I think that's a bit of an exaggeration -- there's only > one major dependency on each platform, and it's a very > widely used one (currently PyObjC/PyGTK/PyWin32). And > I'm thinking about ways to reduce the dependencies further, Pyrex or Cython? -- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 21 Jul, 00:52, Phlip wrote: > Oh, and you can TDD it, too... No, I can't TDD with Tkinter. All my tests fail when there is no OpenGL support (Togl is gone). For TDD to work, the tests must have a chance of passing. -- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 20 Jul, 22:58, Phlip wrote:
> Tkinter sucks because it looks like an enfeebled Motif 1980s dawn-of-
> GUIs scratchy window with grooves and lines everywhere.

And using it with OpenGL has been impossible since Python 2.2 (or whatever).
-- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 20 Jul, 22:58, Phlip wrote:
> Tkinter sucks because it looks like an enfeebled Motif 1980s dawn-of-
> GUIs scratchy window with grooves and lines everywhere.

The widget set is limited compared to GTK or Qt, though it has the most common GUI controls, and it does not look that bad with the recent ttk styling (it actually doesn't look like Motif anymore). But it does not have a good GUI builder (that I know of).

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 20 Jul, 06:28, Steven D'Aprano wrote:
> Have you tried Tkinter version 8.0 or better, which offers a native look and
> feel?

Python 2.7.2 |EPD 7.1-1 (64-bit)| (default, Jul 3 2011, 15:34:33) [MSC v.1500 64 bit (AMD64)] on win32
Type "packages", "demo" or "enthought" for more information.
>>> import Tkinter
>>> Tkinter.TkVersion
8.5

-- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 20 Jul, 17:21, Thomas Jollans wrote:
> Don't know about Mac, I was under the impression that GTK was fine on
> Windows these days.

GTK looks awful on Windows, requires a dozen installers (none of which comes from a single source), is not properly stable (nobody cares?), and does not work on 64-bit.
-- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 20 Jul, 13:08, Tim Chase wrote: > http://xkcd.com/927/ > > :-) Indeed. -- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 20 Jul, 13:04, Adam Tauno Williams wrote:
> > 3. Instances of extension types can clean themselves up on
> > deallocation. No parent-child ownership model to mess things up. No
> > manual clean-up. Python does all the reference counting we need.
>
> NEVER GOING TO HAPPEN. UI's don't work that way. They are inherently
> hierarchical. Just get over it.

Swing relies on the Java GC. Tkinter also does this correctly. A hierarchy is nice for event processing and layout management, but not for memory management. C resources should be freed by Python calling tp_dealloc, not by the parent calling a .destroy() method on its children. Python is not C++, so we have a mechanism to automatically reclaim C resources.

I don't want a toolkit to deallocate objects while Python still holds references to them (PyQt) or require a manual call to deallocate a widget tree (wxPython). Python knows when it's time to deallocate C resources, and then makes a call to the tp_dealloc member of the type object.

-- http://mail.python.org/mailman/listinfo/python-list
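A minimal sketch of that clean-up model in pure Python, where a made-up stand-in plays the role of a ctypes-wrapped C GUI library (create_widget and destroy_widget are hypothetical names):

class _FakeLibGUI(object):
    """Stand-in for a ctypes.CDLL wrapping a C GUI library."""
    def create_widget(self):
        return 1                          # pretend C handle
    def destroy_widget(self, handle):
        print("widget %d freed" % handle)

class Widget(object):
    def __init__(self, libgui):
        self._libgui = libgui
        self._handle = libgui.create_widget()
    def __del__(self):
        # Runs when the refcount drops to zero (the pure-Python analogue of
        # tp_dealloc), so the C resource is freed without any parent.destroy().
        self._libgui.destroy_widget(self._handle)

w = Widget(_FakeLibGUI())
del w                                     # prints "widget 1 freed" immediately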
Re: I am fed up with Python GUI toolkits...
On 20 Jul, 16:17, Mel wrote:
> OTOH, if you intend to re-use the Dialog object, it's not a memory leak.

It cannot be reused if you don't have any references pointing to it. Sure, it is nice to have dialogs that can be hidden and re-displayed, but only those that can still be accessed. tp_dealloc should free any C resources the object is holding. There is no need to keep anything beyond the call to tp_dealloc; before that call, the C resources should be kept, for the reason you mentioned. That is why the parent-child method of clean-up is at odds with Python's garbage collection.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 20 Jul, 11:59, Thomas Jollans wrote:
> Okay, I haven't used SWT yet: manual memory management? Java is GC!
>
> It is perfectly reasonable to be required to manually call some sort of
> destroy() method to tell the toolkit what you no longer want the user to
> see: firstly, you have given the display a reference to your window, in a
> manner of speaking, by showing it. Secondly, in a GC environment like a
> JVM or the CLI, it could take a moment. Was that what you meant?

A .hide() method is warranted, but not a .destroy() method to deallocate C resources. Python calls tp_dealloc when needed.
-- http://mail.python.org/mailman/listinfo/python-list
Re: I am fed up with Python GUI toolkits...
On 20 Jul, 11:59, Thomas Jollans wrote:
> I wonder - what do you think of GTK+?

PyGTK with GLADE is the easiest to use, but a bit awkward looking on Windows and Mac. (Not to mention the number of dependencies that must be installed, including a GTK runtime.)

> Really, while Swing and Tkinter are particularly bad as they draw their
> own widgets GTK and Qt do that as well.
> > The Eclipse SWT library does some of this for Java, though it also has
> > flaws (e.g. manual memory management). A Python GUI toolkit could be
> > partially based on the SWT code.
>
> Okay, I haven't used SWT yet: manual memory management? Java is GC!

So is Python, yet wxPython requires manual destruction of dialogs as well.

> It is perfectly reasonable to be required to manually call some sort of
> destroy() method to tell the toolkit what you no longer want the user to
> see

Yes, but not to avoid a memory leak.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
I am fed up with Python GUI toolkits...
What is wrong with them:

1. Designed for other languages, particularly C++, tcl and Java.

2. Bloatware. Qt and wxWidgets are C++ application frameworks. (Python has a standard library!)

3. Unpythonic memory management: Python references to deleted C++ objects (PyQt). Manual dialog destruction (wxPython). Parent-child ownership might be smart in C++, but in Python we have a garbage collector.

4. They might look bad (Tkinter, Swing with Jython).

5. All projects to write a Python GUI toolkit die before they are finished. (General lack of interest, bindings for Qt or wxWidgets bloatware are mature, momentum for web development, etc.)

How I would prefer the GUI library to be, if based on "native" widgets:

1. Lean and mean -- do nothing but GUI. No database API, networking API, threading API, etc.

2. Do as much processing in Python as possible. No more native code (C, C++, Cython) than needed.

3. Instances of extension types can clean themselves up on deallocation. No parent-child ownership model to mess things up. No manual clean-up. Python does all the reference counting we need.

4. No artist framework. Use OpenGL, Cairo, AGG or whatever else is suitable.

5. No particular GUI thread synchronization is needed -- Python has a GIL.

6. Expose the event loop to Python.

7. Preferably BSD-style license, not even LGPL.

8. Written for Python in Python -- not bindings for a C++ or tcl toolkit.

The Eclipse SWT library does some of this for Java, though it also has flaws (e.g. manual memory management). A Python GUI toolkit could be partially based on the SWT code.

Is it worth the hassle to start a new GUI toolkit project? Or should modern desktop apps be written with something completely different, such as HTML5?

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: subprocess & isql
On 15 Jul, 04:47, "peterff66" wrote:
> I am working on a project that retrieves data from remote Sybase. I have to
> use isql and subprocess to do this. Is the following correct?
> I once had to hammer a nail, but I had to use a screw driver. Didn't work.
> Did anybody do similar work before? Any ideas (or code piece) to share?

Nobody has ever used Python to connect with an SQL database before. There is no Python database API, and no Sybase module.
-- http://mail.python.org/mailman/listinfo/python-list
Re: What is the difference between PyPy and Python? are there lot of differences?
On 13 Jul, 16:06, ArrC wrote:
> And they also talked about the lack of type check in python.
>
> So, how does it help (strongly typed) in debugging?

Python is strongly typed. There are no static type checks in Python; type checks are done at runtime. Dynamic typing does not mean that Python is a weakly typed language.

The question of debugging is often raised, particularly by Java heads: In Python, the "doctest" and "unittest" modules can be used to verify that code works according to specification (e.g. trap type errors), and are common alternatives to static type checks.

http://docs.python.org/release/3.2/library/doctest.html
http://docs.python.org/release/3.2/library/unittest.html

It is good practice to always write tests for your code.

Python 3.x also has function argument and return value annotations, which tools can use as a further guard against type errors:

http://www.python.org/dev/peps/pep-3107/

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
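A minimal sketch of both ideas -- a doctest that traps a type error when the tests run, and a Python 3 annotation documenting the intended types:

def scale(vector: list, factor: float) -> list:
    """Scale every element of a vector.

    >>> scale([1.0, 2.0], 3.0)
    [3.0, 6.0]
    >>> scale([1.0, 2.0], "3")
    Traceback (most recent call last):
        ...
    TypeError: can't multiply sequence by non-int of type 'float'
    """
    return [x * factor for x in vector]

if __name__ == "__main__":
    import doctest
    doctest.testmod()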
Re: How to write a file generator
On 12 Jul, 16:46, Billy Mays wrote: > I know the problem lies with the StopIteration, but I'm not sure how to > tell the caller that there are no more lines for now. Try 'yield None' instead of 'raise StopIteration'. Sturla -- http://mail.python.org/mailman/listinfo/python-list
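A minimal sketch of that idea for following a growing file: yield None to mean "no more lines for now", so the generator stays alive and the caller decides whether to wait (the file name is a placeholder):

import sys
import time

def follow(path):
    """Yield lines as they are appended; yield None when the file is idle."""
    with open(path) as f:
        while True:
            line = f.readline()
            yield line if line else None

for line in follow("app.log"):        # placeholder path; loops forever
    if line is None:
        time.sleep(0.1)               # nothing new yet -- wait and try again
    else:
        sys.stdout.write(line)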
Re: Python bug? Indexing to matrices
On 12 Jul, 14:59, sturlamolden wrote:
> ma = np.matrix(a)
> mb = np.matrix(b)
> a*b

ma*mb

Sorry for the typo.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Python bug? Indexing to matrices
On 12 Jul, 07:39, David wrote:
> Should the following line work for defining a matrix with zeros?
>
> c= [[0]*col]*row

No. The rows will be aliased. This will work:

c = [[0]*col for i in range(row)]

Note that Python lists are not meant to be used as matrices. We have NumPy or the array module for that.

> If this a valid way of initializing a matrix in Python 3.2.1, then it
> appears to me that a bug surfaces in Python when performing this line:
>
> c[i][j] = c[i][j] + a[i][k] * b[k][j]
>
> It writes to the jth column rather than just the i,j cell.

That is due to aliasing.

> I'm new at Python and am not sure if I'm just doing something wrong if
> there is really a bug in Python. The script works fine if I
> initialize the matrix with numpy instead:
>
> c = np.zeros((row,col))
>
> So, I know my matrix multiply algorithm is correct (I know I could use
> numpy for matrix multiplication, this was just to learn Python).

Like so:

np.dot(a,b)

or

ma = np.matrix(a)
mb = np.matrix(b)
a*b

or call BLAS directly:

scipy.linalg.fblas.dgemm

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Wgy isn't there a good RAD Gui tool fo python
On 12 Jul, 01:33, Dave Cook wrote:
> I prefer spec-generators (almost all generate XML these days) like
> QtDesigner to code-generators like Boa. I've only seen one good
> argument for code generation, and that's to generate code for a layout
> to "see how it's done". But code could always be generated
> automatically from a spec.

wxFormBuilder will produce C++, Python and XML. Pick the one you like!

The advantage of using XML in tools like GLADE, QtCreator, and more recently Visual C#, is separation of layout and program logic. The problem with code generators like Visual C++ or Delphi was the mixing of generated and hand-written code. However, there is no real advantage to using XML instead of C++ or Python: C++ and Python code are also structured text, and one kind of structured text is as good as another.

"There once was a man who had a problem. He said: 'I know, I will use XML.' Now he had two problems."

When using wxFormBuilder, the generated .cpp, .h, .py or .xrc files are not to be edited. To write event handlers, we inherit from the generated classes. Thus, program view (generated code) and program control (hand-written code) are kept in separate source files. Because C++ and Python have multiple inheritance, we can even separate the program control into multiple classes. What we instantiate is a class that inherits the designed dialog class (generated) and the event handler classes (hand-written).

Therefore, XML has no advantage over Python in the case of wxFormBuilder. XML just adds a second layer of complexity we don't need: not only must we write the same program logic, we must also write code to manage the XML resources. Hence, we are left with two problems instead of one.

This is not specific to wxFormBuilder: in many cases when working with Python (and to a somewhat lesser extent C++), one is left to conclude that XML serves no real purpose.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
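A minimal sketch of that workflow; "gui_generated" and "MainFrameBase" stand in for whatever module and class names wxFormBuilder emits for a given project, and the generated file itself is never edited by hand:

import wx
from gui_generated import MainFrameBase   # hypothetical generated module

class MainFrame(MainFrameBase):
    """Program control: only hand-written event handlers live here."""
    def on_quit(self, event):              # handler name chosen in the designer
        self.Close()

if __name__ == "__main__":
    app = wx.App(False)
    MainFrame(None).Show()
    app.MainLoop()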
Re: perceptron feed forward neural networks in python
On 11 Jul, 20:47, Ken Watford wrote:
> On Mon, Jul 11, 2011 at 2:31 PM, Igor Begić wrote:
> > Hi,
> > I'm new to Python and i want to study and write programs about perceptron
> > feed forward neural networks in python. Does anyone have a good book or link
> > for this?
>
> Try Stephen Marsland's "Machine Learning: An Algorithmic Perspective".
> All example code is done in Python, and there's a chapter on
> multilayer perceptrons.
>
> The code for the book is available online here:
> http://www-ist.massey.ac.nz/smarsland/MLbook.html

This is quite simple with tools like NumPy and SciPy. E.g. use numpy.dot (level-3 BLAS matrix multiply) for the forward pass, and scipy.optimize.leastsq (MINPACK Levenberg-Marquardt) for the training.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
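A minimal sketch of that approach for a single hidden layer; the network shape and training data below are made up, and the residual function is what leastsq expects:

import numpy as np
from scipy.optimize import leastsq

n_in, n_hid, n_out = 3, 5, 1                 # made-up network shape
X = np.random.rand(100, n_in)                # made-up training inputs
y = np.sin(X.sum(axis=1))                    # made-up training targets

def unpack(w):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:].reshape(n_hid, n_out)
    return W1, W2

def residuals(w):
    W1, W2 = unpack(w)
    hidden = np.tanh(np.dot(X, W1))          # forward pass: level-3 BLAS via np.dot
    return np.dot(hidden, W2).ravel() - y    # leastsq minimises the sum of squares

w0 = 0.1 * np.random.randn(n_in * n_hid + n_hid * n_out)
w_opt, ier = leastsq(residuals, w0)          # Levenberg-Marquardt from MINPACK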
Re: Wgy isn't there a good RAD Gui tool fo python
On 11 Jul, 22:35, Kevin Walzer wrote:
> One reason there hasn't been much demand for a GUI builder is that, in
> many cases, it's just as simple or simpler to code a GUI by hand.

Often a GUI builder is used as a poor replacement for a sketch pad and pencil. With layout managers (cf. wxWidgets, Qt, Swing, SWT, Tkinter) it is easier to "sketch and code" than with common MS Windows toolkits (e.g. MFC, .NET Forms, Visual Basic, Delphi), which use absolute positioning and anchors. Using a GUI builder with layout managers might actually feel awkward. But with absolute positioning and anchors, there is no way to avoid a GUI builder.

That said, we have good GUI builders for all the common Python GUI toolkits. Sometimes a mock-up GUI designer like DesignerVista might help. Yes, and actually hiring a graphical designer helps too.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Wgy isn't there a good RAD Gui tool fo python
On 11 Jul, 21:58, sturlamolden wrote:
> http://wxformbuilder.org/

This demo uses C++, but it works the same with Python (wxPython code is generated similarly):

http://zamestnanci.fai.utb.cz/~bliznak/screencast/wxfbtut1/wxFBTut1_controller.swf

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Wgy isn't there a good RAD Gui tool fo python
On 11 Jul, 20:28, Ivan Kljaic wrote:
> The ony worthly ones mentioning as an gui builder are boa constructor
> fo wx, qtDesigner with the famous licence problems why companies do
> not want to work with it, sharpdevelop for ironpython and netbeans for
> jython.

There is wxFormBuilder for wxPython; I suppose you've missed it. Of the three GUI builders for wxPython (wxFormBuilder, wxGLADE, Boa Constructor), you managed to pick the weakest.

The license for Qt is LGPL, the same as for wxWidgets. Both have LGPL Python bindings (PySide and wxPython), so why is Qt's license scarier than wxWidgets'?

I have an idea why you think QtCreator cannot be used with Python. If you had actually used it, you would have noticed that the XML output file can be compiled by PyQt and PySide.

SharpDevelop for IronPython means you've missed Microsoft Visual Studio. Bummer. And I am not going to mention IBM's alternative to NetBeans, as I am sure you can Google it.

And did you forget about GLADE, or do you disregard GTK (PyGTK) as a toolkit completely?

Regards,
Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Wgy isn't there a good RAD Gui tool fo python
On 11 Jul, 21:58, sturlamolden wrote: > That's eight. Sorry, nine ;) -- http://mail.python.org/mailman/listinfo/python-list
Re: Wgy isn't there a good RAD Gui tool fo python
On 11 Jul, 20:28, Ivan Kljaic wrote:
> To summarize it. It would be very helpfull for python to spread if
> there qould be one single good rad gui builder similar to vs or
> netbeAns but for python. So right now if i need to make a gui app i
> need to work with an applicatio that is dicontinued for the last 5
> years is pretty buggy but is ok.

http://wxformbuilder.org/

Shut up.

> The ony worthly ones mentioning as an gui builder are boa constructor
> fo wx, qtDesigner with the famous licence problems why companies do
> not want to work with it, sharpdevelop for ironpython and netbeans for
> jython.
> Did you notice that 2 of these 4 are not for python? One is out of dTe
> and one has a fucked up licence.

Qt and PySide have an LGPL license. QtCreator can be used with Python (there is a Python uic). SharpDevelop has an IronPython GUI builder. Boa Constructor is abandonware, yes. Is it just me, or did I count to three?

And yes, you forgot:

Visual Studio for IronPython
wxGLADE for wxPython
GLADE for PyGTK
BlackAdder for Python and Qt
SpecTcl for Tkinter

That's eight.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Wgy isn't there a good RAD Gui tool fo python
On 11 Jul, 16:10, Thorsten Kampe wrote:
> And as soon as developers start developing for Unix customers (say
> Komodo, for instance), they start following the "Windows model" - as you
> call it.

You are probably aware that Unix and Unix customers have been around since the 1970s. I would expect the paradigm to have changed by now.

S.M.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Wgy isn't there a good RAD Gui tool fo python
On 11 Jul, 14:39, Ben Finney wrote:
> The Unix model is: a collection of general-purpose, customisable tools,
> with clear standard interfaces that work together well, and are easily
> replaceable without losing the benefit of all the others.

This is opposed to the "Windows model" of a one-click installer for a monolithic application.

Many Windows users get extremely frustrated when they have to use more than one tool. There is also a deep anxiety about using the keyboard. This means that command line tools are out of the question (everything needs a GUI). In the Windows world, even programming should be drag-and-drop with the mouse. Windows programmers will go to extreme measures to avoid typing code on their own, as the keyboard is so scary. The most extreme case is not Visual Basic but LabVIEW, where even business logic is drag-and-drop.

A side-effect is that many Windows developers are too dumb to write code on their own, and rely on pre-coded "components" that can be dropped on a "form". A common failure case is multiuser applications, where the developers do not understand anything about what is going on, and scalability is non-existent.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Wgy isn't there a good RAD Gui tool fo python
On 11 Jul, 00:50, Ivan Kljaic wrote:
> Ok Guys. I know that most of us have been expiriencing the need for a
> nice Gui builder tool for RAD and most of us have been googling for it
> a lot of times. But seriously. Why is the not even one single RAD tool
> for Python. I mean what happened to boa constructor that it stopped
> developing. I simply do not see any reasons why there isn't anything.
> Please help me understand it. Any insights?

If by "RAD tool" you mean "GUI builder", I'd recommend wxFormBuilder for wxPython, QtCreator for PyQt or PySide, and GLADE for PyGTK. Personally I prefer wxFormBuilder and wxPython, but it's a matter of taste.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Wgy isn't there a good RAD Gui tool fo python
On 11 Jul, 02:43, Adam Tauno Williams wrote:
> >Because RAD tools are for GUI toolkits, not for languages. If you're
> >using GTK, Glade works fine. Same with QT and QTDesigner. If you're
> >using WPF with IronPython, t
>
> These [Glade, etc...] are *NOT* RAD tools. They are GUI designers. A
> RAD tool provides a GUI designer that can be bound to a backend
> [typically an SQL database]. RAD = GUI + ORM.

The type specimens for "RAD tools" were Borland Delphi and Microsoft Visual Basic. They were not a combination of GUI designer and SQL/ORM backend. They were a combination of GUI designer, code editor, compiler, and debugger.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Python's super() considered super!
On 27 Mai, 18:06, Steven D'Aprano wrote:
> Why? The fault is not that super is a function, or that you monkey-
> patched it, or that you used a private function to do that monkey-
> patching. The fault was that you made a common, but silly, mistake when
> reasoning about type(self) inside a class.

That was indeed a silly mistake, but not what I am talking about. See Stefan's response.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Python's super() considered super!
On 27 Mai, 23:49, Stefan Behnel wrote:
> I think Sturla is referring to the "compile time" bit. CPython cannot know
> that the builtin super() will be called at runtime, even if it sees a
> "super()" function call.

Yes. And the opposite: CPython cannot know that the builtin super() is not called, even if it does not see the name 'super'. I can easily make foo() an alias for super().

In both cases, the cure is a keyword -- or making sure that __class__ is always defined.

If super is to be a keyword, we could argue that self and cls should be keywords as well, and methods should always be bound. That speaks in favour of a super() function. But then it should always be evaluated at run-time, without any magic from the parser. Magic functions belong in Perl, not Python.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Python's super() considered super!
On 27 Mai, 17:05, Duncan Booth wrote: > Oops. There's a reason why Python 2 requires you to be explicit about > the class; you simply cannot work it out automatically at run time. > Python 3 fixes this by working it out at compile time, but for Python 2 > there is no way around it. Then it should be a keyword, not a function. Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Python's super() considered super!
On 27 Mai, 17:05, Duncan Booth wrote:
> >>> class C(B): pass
> >>> C().foo()
>
> ... infinite recursion follows ...

That's true :(
-- http://mail.python.org/mailman/listinfo/python-list
Re: Python's super() considered super!
On 27 Mai, 16:27, sturlamolden wrote:
> Assuming that 'self' will always be named
> 'self' in my code, I tend to patch __builtins__.super like this:
>
> import sys
> def super():
>     self = sys._getframe().f_back.f_locals['self']
>     return __builtins__.super(type(self),self)

A monkey-patch to __builtins__.super would probably also work. Assuming the first argument to the callee is 'self' or 'cls':

import sys

_super = __builtins__.super

def monkeypatch(*args, **kwargs):
    # No arguments: emulate the Python 3 zero-argument form by digging
    # 'self' (or 'cls') out of the caller's frame.
    if (args == ()) and (kwargs == {}):
        try:
            obj = sys._getframe().f_back.f_locals['self']
        except KeyError:
            obj = sys._getframe().f_back.f_locals['cls']
        return _super(type(obj), obj)
    else:
        # Arguments given: defer to the original built-in super.
        return _super(*args, **kwargs)

class patcher(object):
    def __init__(self):
        __builtins__.super = monkeypatch
    def __del__(self):
        # Restore the original built-in when the patcher is collected.
        __builtins__.super = _super

_patch = patcher()

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
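A small usage sketch, assuming the monkey-patch above has been applied (and keeping in mind the type(self) limitation discussed elsewhere in this thread, which breaks under further subclassing):

class A(object):
    def hello(self):
        return "A"

class B(A):
    def hello(self):
        # Python 3 style zero-argument call, resolved by the patched super()
        return "B -> " + super().hello()

print(B().hello())   # prints "B -> A"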
Re: Python's super() considered super!
On 26 Mai, 18:31, Raymond Hettinger wrote:
> I just posted a tutorial and how-to guide for making effective use of
> super().
>
> One of the reviewers, David Beazley, said, "Wow, that's really
> great! I see this becoming the definitive post on the subject"
>
> The direct link is:
>
> http://rhettinger.wordpress.com/2011/05/26/super-considered-super/

I really don't like the Python 2 syntax of super, as it violates the DRY principle: Why do I need to write super(type(self),self) when super() will do? Assuming that 'self' will always be named 'self' in my code, I tend to patch __builtins__.super like this:

import sys
def super():
    self = sys._getframe().f_back.f_locals['self']
    return __builtins__.super(type(self),self)

This way the nice Python 3.x syntax can be used in Python 2.x.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: What other languages use the same data model as Python?
On May 4, 7:15 pm, Benjamin Kaplan wrote: > You missed a word in the sentence. > > "If you can see this, you DON'T have call-by-value" Indeed I did, sorry! Then we agree :) Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Hooking into Python's memory management
On May 4, 6:51 pm, Daniel Neilson wrote:
> In either case, if such a module is possible, any pointers you could
> provide regarding how to implement such a module would be appreciated.

The gc module will hook into the garbage collector. The del statement will remove an object from the current scope (delete the variable name and decrement the reference count).

Python (CPython, that is) is reference counted. When the refcount drops to zero, the object is immediately garbage collected. Python is not like Java, where this happens in batches. The __del__ method is executed deterministically; it is not like a finalizer in Java or C#. Only dead objects involved in reference cycles may linger until they are spotted by the GC -- and they must not have a __del__ method, or else the GC will ignore them. In fact, if you don't create circular references, the GC can safely be turned off.

If you want volatile references, Python allows weak references as well.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
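A minimal sketch of the deterministic behaviour described above, plus a weak reference that does not keep the object alive (CPython):

import weakref

class Resource(object):
    def __del__(self):
        # Runs as soon as the last reference disappears; no GC pause needed.
        print("cleaned up")

r = Resource()
w = weakref.ref(r)    # volatile reference: does not increase the refcount
print(w() is r)       # True -- the object is still alive
del r                 # refcount hits zero: "cleaned up" is printed here
print(w())            # None -- the weak reference is now dead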
Re: What other languages use the same data model as Python?
On May 4, 5:40 pm, Michael Torrie wrote:
> Which is exactly what the code showed. The first one isn't a mistake.
> You just read it wrong.

No, I read "call-by-value" but it does not make a copy. Call-by-value dictates a deep copy or copy-on-write. Python does neither. Python passes a handle to the object, not a handle to a copy of the object.

If you want to see call-by-value in practice, take a look at MATLAB, SciLab or Octave; or consider what C++ copy constructors do in function calls with value types.

The first one is indeed a mistake. An object has a value. A name binds to an object, not to a value. If Python did pass-by-value, the string would be inserted into an object (here: a list) with the same value (e.g. an empty list); it would not modify the same object by which you called the function.

I think you understand what Python does, but not what call-by-value would do. C++ tells you the difference:

// copy constructor is invoked
// x is a copy of the argument's value
// this is call-by-value
void foobar1(Object x);

// no copy is taken
// x is a logical alias of the argument
// this is call-by-reference
void foobar2(Object &x);

// x is a pointer, not an object
// x is a copy of another pointer
// this is similar to Python semantics
// the pointer is passed by value, not the pointee
// in C, this is sometimes called call-by-reference
// as there are no reference types, but it's not
void foobar3(Object *x);

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: What other languages use the same data model as Python?
On May 3, 6:33 pm, Mel wrote:
> def identify_call (a_list):
>     a_list[0] = "If you can see this, you don't have call-by-value"
>     a_list = ["If you can see this, you have call-by-reference"]

The first one is a mistake. If it were pass-by-value, it would assign the string to a list unseen by the caller -- i.e. a copy of the caller's argument (same value, different object). But that does not happen. The string is assigned to the list seen by the caller. Thus we can exclude call-by-value.

The second proposition is correct. This allows us to exclude pass-by-reference similar to C++, Pascal and Fortran.

Thus:

def identify_call (a_list):
    a_list[0] = "If you cannot see this, you have call-by-value"
    a_list = ["If you can see this, you have call-by-reference"]

Clearly Python has neither call-by-value nor call-by-reference. Python uses a third mechanism.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
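A short driver for the corrected function makes the point explicit:

def identify_call(a_list):
    a_list[0] = "If you cannot see this, you have call-by-value"
    a_list = ["If you can see this, you have call-by-reference"]

args = ["original element"]
identify_call(args)
print(args)   # ['If you cannot see this, you have call-by-value']
# The mutation through the shared object is visible (so not call-by-value),
# but the rebinding inside the function is not (so not call-by-reference).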
Re: What other languages use the same data model as Python?
On May 3, 3:50 pm, Hrvoje Niksic wrote:
> I would say that, considering currently most popular languages and
> platforms, Python's data model is in the majority. It is only the
> people coming from a C++ background that tend to be confused by it.

In C++, one will usually put class variables (objects) on the stack or in STL containers, and use references instead of pointers. This way one gets deterministic clean-up, and operator overloading will work as expected. It is a common beginner mistake for people coming from Java to use "new" anywhere in C++ code, instead of only inside constructors.

Used properly, C++ has a data model for class variables similar to Fortran (pass-by-reference). This is complicated by the ability of C++ to pass-by-value for backwards compatibility with C, including the use of raw pointers. This hybrid and convoluted data model of C++ is a common source of confusion and programming mistakes, both for programmers coming from C++ to Python or C#, and vice versa.

Java is somewhere between C and C#, in that it has C semantics for elementary types (e.g. int and float), but not for objects in general. In C# and Python, elementary types are immutable objects, but they have no special pass-by-value semantics.

Python has the same data model as Scheme. This includes that code is an object in Python (in Python that is byte code, not source code, thus no Lisp macros). Variables are names that bind to an object. Objects are passed as references, but names are not. "Dummy arguments" (to use Fortran terminology) are bound to the same objects with which the function was called, but this is not call-by-reference semantics in the style of Fortran and C++: In Python, the "=" operator will rebind in the local scope, as in C, Java and C#. It will not affect anything in the calling scope (as it would in C++ and Fortran). Nevertheless, this is not pass-by-value, as no copy is made. A reference in C++ and Fortran is an alias for the variable in the calling scope. In Python it is a new variable pointing to the same value. This is a major difference, but a common source of error for those that don't understand it.

(For those confused about the claimed behavior of "=" in C++: the previous paragraph deals with reference variables in C++, not those passed with C-style pass-by-value semantics. C++ does not always behave as C; sometimes it behaves like Fortran and Pascal.)

Thus, Python does not pass-by-value like C or C++, nor does it pass-by-reference like C++ or Fortran. The semantics of Python, C# and Lisp might be described as "pass-by-handle" if we need to put a name on it.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Vectors
On Apr 23, 2:26 pm, Algis Kabaila wrote:
> I do understand that many people prefer Win32 and
> appreciate their right to use what they want. I just am at a
> loss to understand *why* ...

For the same reason some people preferred OS/2 or DEC to SunOS or BSD. For the same reason some people prefer Perl or Java to Python.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Vectors
On Apr 23, 2:32 am, Algis Kabaila wrote: > Thanks for that. Last time I looked at numpy (for Python3) it > was available in source only. I know, real men do compile, but > I am an old man... I will compile if it is unavoidable, but in > case of numpy it does not seem a simple matter. Am I badly > mistaken? There is a Win32 binary for Python 3.1: http://sourceforge.net/projects/numpy/files/NumPy/1.5.1/ I have not tried to compile NumPy as I use Enthought to avoid such headaches. I value my own time enough to pay for a subscription ;-) http://enthought.com/ Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Vectors
On Apr 20, 9:47 am, Algis Kabaila wrote:
> Are there any modules for vector algebra (three dimensional
> vectors, vector addition, subtraction, multiplication [scalar
> and vector]. Could you give me a reference to such module?

NumPy.

Or one of these libraries (via ctypes or Cython):

BLAS (Intel MKL, ACML, ACML-GPU, GotoBLAS2, or ATLAS)
Intel VML
ACML-VM

-- http://mail.python.org/mailman/listinfo/python-list
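A minimal sketch of the requested vector algebra with NumPy:

import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])

print(a + b)            # vector addition
print(a - b)            # vector subtraction
print(3.0 * a)          # scalar multiplication
print(np.dot(a, b))     # scalar (dot) product -> 0.0
print(np.cross(a, b))   # vector (cross) product -> [0. 0. 2.]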
Re: About threads in python
On Apr 21, 3:19 pm, dutche wrote:
> My question is about the efficiency of threads in python, does anybody
> has something to share?

Never mind all the FUD about the GIL. Most of it is ill-informed and plain wrong.

The GIL prevents you from doing one thing, which is running parallel compute-bound code in plain Python. But that is close to idiotic use of Python threads anyway. I would seriously question the competence of anyone attempting this, regardless of the GIL. It is optimising computational code at the wrong end. To optimise computational code, notice that Python itself gives you a 200x performance penalty. That is much more important than not using all 4 cores on a quad-core processor. In this case, start by identifying the bottlenecks using the profiler. Then apply C libraries to those hotspots, or rewrite them in Cython. If that is not sufficient, you can start to think about using more hardware (e.g. multithreading in C or Cython).

This advice only applies to computational code, though. Most use cases for Python will be i/o bound (file i/o, GUI, database, webserver, internet client), for which the GIL is not an issue at all. Python threads will almost always do what you expect. Try your code first -- if it doesn't scale, then ask this question. Usually the problem will be in your own code and have nothing to do with the GIL. This is almost certainly the case if an i/o bound server does not scale, as i/o bound Python code is (almost) never affected by the GIL. In cases where a multi-threaded i/o solution does not scale, you likely want to use an asynchronous design instead, as the problem can be the use of threads per se. See if Twisted fits your need.

Scalability problems for an i/o bound server might also be external to Python. For example, it could be the database server and not the use of Python threads. Switching from SQLite to Microsoft SQL Server, say, will have an impact on the way your program behaves under concurrent load: SQLite is faster per query, but has a global lock. If you want concurrent access to a database, a global lock in the database is a much more important issue than a global lock in the Python interpreter. If you want a fast response, the sluggishness of the database can be more important than the synchronization of the Python code.

In the rare event that the GIL is an issue, there are still things you can do. You can always use processes instead of threads (i.e. multiprocessing.Process instead of threading.Thread). Since the API is similar, writing threaded code is never a waste of effort. There are also Python implementations that don't have a GIL (PyPy, Jython, IronPython); you can just swap interpreter and see if it scales better. Testing with another interpreter or multiprocessing is a good litmus test to see if the problem is in your own code. Cython and Pyrex are compilers for a special CPython extension module language. They give you full control over the GIL, as well as the speed of C when you need it. They can make Python threads perform as well as threads in C for computational code -- I have e.g. compared with OpenMP to confirm this for myself, and there is really no difference.

You thus have multiple options if the GIL gets in your way, without a major rewrite, including:

- An interpreter without a GIL (PyPy, Jython, IronPython)
- multiprocessing.Process instead of threading.Thread
- Cython or Pyrex

Things that require a little bit more effort include:

- Use a multi-threaded C library for your task
- ctypes.CDLL
- OpenMP in C/C++, called with ctypes or Cython
- An out-of-process COM+ server + pywin32
- Fortran + f2py
- Rewrite to use os.fork (except on Windows)

IMHO: The biggest problem with the GIL is not the GIL itself, but bullshit FUD and C libraries that don't release the GIL as often as they should. NumPy and SciPy are notorious cases of the latter, and there are similar cases in the standard library as well.

If I should give one piece of advice, it would be to just try Python threads and see for yourself. Usually they will do what you expect. You only have a problem if they do not. In that case there are plenty of things that can be done, most of them with very little effort.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
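To illustrate why "writing threaded code is never a waste of effort": swapping threads for processes can be a one-line change, as in this toy sketch:

import threading
import multiprocessing

def work(n):
    print(sum(i * i for i in range(n)))   # stand-in for real work

if __name__ == "__main__":
    # Pick one -- the call signatures are the same.
    worker = threading.Thread(target=work, args=(10**6,))
    # worker = multiprocessing.Process(target=work, args=(10**6,))
    worker.start()
    worker.join()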
Re: An unusual question...
On Apr 17, 7:25 pm, Chris Angelico wrote:
> It sounds to me like you're trying to pull off a classic buffer
> overrun and remote code execution exploit, in someone else's Python
> program. And all I have to say is Good luck to you.

He might. But this also has legitimate uses, such as implementing a JIT compiler. E.g. this is what Psyco and PyPy do.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: An unusual question...
On Apr 17, 2:15 pm, wrote:
> I can also find out where it is EXACTLY just as
> easily so this is not my problem.
>
> The problem is calling it!

You'll need to mmap or valloc a page-aligned memory buffer (for which the size must be a multiple of the system page size), and call mprotect to make it executable. Copy your binary code into this buffer. Then you will need to do some magic with ctypes, Cython or C to call it; i.e. cast or memcpy the address of the executable buffer into a function pointer, and dereference/call the function pointer.

If that sounds like gibberish, see Steven's comment about heart transplants.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
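A minimal sketch of those steps with mmap and ctypes on POSIX. The machine code below is a hard-coded x86-64 "return 42", so it only makes sense on that architecture, and hardened systems may refuse a writable-and-executable mapping; on Windows one would use VirtualAlloc/VirtualProtect instead:

import ctypes
import mmap

code = b"\xb8\x2a\x00\x00\x00\xc3"   # x86-64: mov eax, 42; ret

buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Cast the buffer's address into a C function pointer and call it.
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(func())   # 42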
Re: Questions about GIL and web services from a n00b
On Apr 17, 5:15 pm, Ian wrote:
> > 5) Even in CPython, I/O-bound processes are not slowed significantly
> > by the GIL. It's really CPU-bound processes that are.
>
> Its ONLY when you have two or more CPU bound threads that you may have
> issues.

And when you have a CPU bound thread, it's time to find the offending bottleneck and analyse the issue. Often it will be a bad choice of algorithm, for example one that is O(N**2) instead of O(N log N). If that is the case, it is time to recode. If algorithmic complexity is not the problem, it is time to remember that Python gives us a 200x speed penalty over C. Moving the offending code to a C library might give a sufficient speed boost. If even that does not help, we could pick a library that uses multi-threading internally, or we could release the GIL and use multiple threads from Python. And if the library is not thread-safe, it is time to use multiprocessing.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
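Finding the offending bottleneck is a one-liner with the standard profiler; a minimal sketch, where main() stands in for the CPU bound code being investigated:

import cProfile
import pstats

def main():
    sum(i * i for i in range(10**6))   # placeholder for the real workload

cProfile.run("main()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)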
Re: Questions about GIL and web services from a n00b
On Apr 15, 6:33 pm, Chris H wrote:
> 1. Are you sure you want to use python because threading is not good due
> to the Global Lock (GIL)? Is this really an issue for multi-threaded
> web services as seems to be indicated by the articles from a Google
> search? If not, how do you avoid this issue in a multi-threaded process
> to take advantage of all the CPU cores available?

By the way: The main issue with the GIL is all the FUD written by people who don't properly understand the issue. It's not easy to discern the real information from the unqualified FUD. I wish people who think in Java would just shut up about things they don't understand. The problem is they think they understand more than they do.

Also, people who write multi-threaded programs that fail to scale because of false-sharing issues really should not pollute the web with FUD about Python's GIL. Java's "free threading" model is not better when all you use it for is to create dirty cache lines that must be reloaded everywhere.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Questions about GIL and web services from a n00b
On Apr 17, 12:10 am, Michael Torrie wrote: > Many GUI toolkits are single-threaded. And in fact with GTK and MFC you > can't (or shouldn't) call GUI calls from a thread other than the main > GUI thread. Most of them (if not all?) have a single GUI thread, and a mechanism by which to synchronize with the GUI thread. -- http://mail.python.org/mailman/listinfo/python-list
Re: Questions about GIL and web services from a n00b
On Apr 16, 4:59 am, David Cournapeau wrote:
> My experience is that if you are CPU bound, asynchronous programming
> in python can be more a curse than a blessing, mostly because the
> need to insert "scheduling points" at the right points to avoid
> blocking and because profiling becomes that much harder in something
> like twisted.

I think Raymond's argument was that a multi-threaded server design does not scale well in any language. There is a reason that Windows I/O completion ports use a pool of worker threads, and not one thread per asynchronous I/O request. A multi-threaded design for a webservice will hit a scalability wall long before CPU saturation becomes an issue.
-- http://mail.python.org/mailman/listinfo/python-list
Re: Questions about GIL and web services from a n00b
On Apr 15, 6:33 pm, Chris H wrote:
> 1. Are you sure you want to use python because threading is not good due
> to the Global Lock (GIL)? Is this really an issue for multi-threaded
> web services as seems to be indicated by the articles from a Google
> search? If not, how do you avoid this issue in a multi-threaded process
> to take advantage of all the CPU cores available?

First, if you are stupid enough to do compute-bound work in Python without using a library, you have worse problems than the GIL. How incompetent would you need to be to write multi-threaded matrix multiplication or FFTs in pure Python, and blame the GIL for the lack of performance?

Second, if you think taking "advantage of all the CPU cores available" will make a difference for an I/O bound webservice, you're living in cloud cuckoo land. How on earth will multiple CPU cores give you or your clients a faster network connection? The network connection is likely to saturate long before you're burning the CPU.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Free software versus software idea patents (was: Python benefits over Cobra)
On 7 apr, 09:39, Steven D'Aprano wrote:
> It's astonishing how anti-Mono FUD just won't die. (Something can be
> true, and still FUD. "Oh no, people might *choke* on a peanut, or have an
> allergic reaction, we must label every piece of food May Contain Nuts
> just in case, because you never know!!!")
> Look, patent threats are real, but we don't gain anything by exaggerating
> the threat, and we *especially* don't gain anything by treating one patent
> holder as the Devil Incarnate while ignoring threats from others.

Regardless of source, the obvious conclusion from the patent FUD is this:

- Python might infringe on a patent, never use Python.
- Mono might infringe on a patent, never use Mono.
- Java might infringe on a patent, never use Java.
- .NET might infringe on a patent, never use .NET.
- The C compiler might infringe on a patent, never use C.
- Your code might infringe on a patent, never program anything.
- Software on your computer might infringe on a patent, never dare to use it.
- Hardware on your computer might infringe on a patent, turn it off.
- Risk of litigation is a greater concern than anything a computer can do for you.
- Get rid of your computer before you get sued.
- A slide rule is a hell of an invention.

-- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing, shared memory vs. pickled copies
On 11 apr, 21:11, sturlamolden wrote:
> import numpy as np
> import sharedmem as sm
> private_array = np.zeros((10,10))
> shared_array = sm.zeros((10,10))

I am also using this to implement synchronization primitives (barrier, lock, semaphore, event) that can be sent over an instance of multiprocessing.Queue. The synchronization primitives in multiprocessing cannot be communicated this way.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing, shared memory vs. pickled copies
On 11 apr, 09:21, John Nagle wrote:
> Because nobody can fix the Global Interpreter Lock problem in CPython.
>
> The multiprocessing module is a hack to get around the fact
> that Python threads don't run concurrently, and thus, threaded
> programs don't effectively use multi-core CPUs.

Then why not let NumPy use shared memory in this case? It is as simple as this:

import numpy as np
import sharedmem as sm
private_array = np.zeros((10,10))
shared_array = sm.zeros((10,10))

I.e. the code is identical (more or less).

Also, do you think shared memory has other uses besides parallel computing, such as IPC? It is e.g. an easy way to share data between Python and an external program. Do you e.g. need to share (not send) large amounts of data between Python and Java or Matlab?

The difference here from multiprocessing.Array is that IPC with multiprocessing.Queue will actually work, because I am not using anonymous shared memory. Instead of inheriting handles, the segment has a name in the file system, which means it can be memory mapped from anywhere. I posted the code to the NumPy mailing list yesterday.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
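A minimal usage sketch, under the assumption that the sharedmem module described here is installed and behaves as stated (picklable arrays backed by a named segment): the worker mutates the shared buffer in place, and the parent sees the change without any copying.

import numpy as np
import sharedmem as sm          # the module discussed above
import multiprocessing as mp

def worker(q):
    a = q.get()                 # unpickling maps the same named segment
    a[:] = 42.0                 # mutate shared memory in place

if __name__ == "__main__":
    shared = sm.zeros((10, 10))
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    q.put(shared)
    p.join()
    print(np.all(shared == 42.0))   # True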
Re: Multiprocessing, shared memory vs. pickled copies
On 8 apr, 03:10, sturlamolden wrote:
> That was easy, 64-bit support for Windows is done :-)
>
> Now I'll just have to fix the Linux code, and figure out what to do
> with os._exit preventing clean-up on exit... :-(

Now I feel dumb; it's not worse than monkey patching os._exit, which I should have realised since the machinery already depends on monkey patching NumPy. I must have forgotten we're working with Python here, or I have been thinking too complex.

Ok, 64-bit support for Linux is done too, and the memory leak is gone :-)

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing, shared memory vs. pickled copies
On 10 apr, 18:27, John Nagle wrote: > Unless you have a performance problem, don't bother with shared > memory. > > If you have a performance problem, Python is probably the wrong > tool for the job anyway. Then why does Python have a multiprocessing module? In my opinion, if Python has a multiprocessing module in the standard library, it should also be possible to use it with NumPy. Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing, shared memory vs. pickled copies
On 9 apr, 22:18, John Ladasky wrote:
> So, there are limited advantages to trying to parallelize the
> evaluation of ONE cascade network's weights against ONE input vector.
> However, evaluating several copies of one cascade network's output,
> against several different test inputs simultaneously, should scale up
> nicely.

You get a 1-by-n Jacobian for the errors, for which a parallel vector math library could help. You will also need to train the input layer (errors with an m-by-n Jacobian), with QuickProp or LM, for which an optimized LAPACK can give you a parallel QR or SVD. So all the inherent parallelism in this case can be handled by libraries.

> Well, I thought that NUMPY was that fast library...

NumPy is a convenient container (data buffer object), and often an acceptably fast library (comparable to Matlab). But compared to raw C or Fortran it can be slow (NumPy is memory bound, not compute bound), and compared to many 'performance libraries' NumPy code will run like a turtle.

> My single-CPU neural net training program had two threads, one for the
> GUI and one for the neural network computations. Correct me if I'm
> wrong here, but -- since the two threads share a single Python
> interpreter, this means that only a single CPU is used, right? I'm
> looking at multiprocessing for this reason.

Only access to the Python interpreter is serialised. It is a common misunderstanding that everything is serialized.

Two important points:

1. The functions you call can spawn threads in C or Fortran. These threads can run freely even though one of them is holding the GIL. This is e.g. what happens when you use OpenMP with Fortran or call a performance library. This is also how Twisted can implement asynchronous i/o on Windows using i/o completion ports (Windows will set up a pool of background threads that run freely).

2. The GIL can be released while calling into C or Fortran, and reacquired afterwards, so multiple Python threads can execute in parallel. If you call a function you know to be re-entrant, you can release the GIL when calling it. This is also how Python threads can be used to implement non-blocking i/o with functions like file.read (they release the GIL before blocking on i/o).

The extent to which these two situations can happen depends on the libraries you use, and how you decide to call them.

Also note that NumPy/SciPy can be compiled against different BLAS/LAPACK libraries, some of which will use multithreading internally. Note that NumPy and SciPy do not release the GIL as often as they could, which is why I often prefer to use libraries like LAPACK directly.

In your setup you should release the GIL around computations to prevent the GUI from freezing.

Sturla
-- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing, shared memory vs. pickled copies
On 9 apr, 09:36, John Ladasky wrote: > Thanks for finding my discussion! Yes, it's about passing numpy > arrays to multiple processors. I'll accomplish that any way that I > can. My preferred ways of doing this are: 1. Most cases for parallel processing are covered by libraries, even for neural nets. This particularly involves linear algebra solvers and FFTs, or calling certain expensive functions (sin, cos, exp) over and over again. The solution here is optimised LAPACK and BLAS (Intel MKL, AMD ACML, GotoBLAS, ATLAS, Cray libsci), optimised FFTs (FFTW, Intel MKL, ACML), and fast vector math libraries (Intel VML, ACML). For example, if you want to make multiple calls to the function "exp", there is a good chance you want to use a vector math library. Despite of this, most Python programmers' instinct seems to be to use multiple processes with numpy.exp or math.exp, or use multiple threads in C with exp from libm (cf. math.h). Why go through this pain when a single function call to Intel VML or AMD ACML (acml-vm) will be much better? It is common to see scholars argue that "yes but my needs are so special that I need to customise everything myself." Usually this translates to "I don't know these libraries (not even that they exist) and are happy to reinvent the wheel." Thus, if you think you need to use manually managed threads or processes for parallel technical computing, and even contemplate that the GIL might get in your way, there is a 99% chance you are wrong. You will almost ALWAYS want ot use a fast library, either directly in Python or linked to your own serial C or Fortran code. You have probably heard that "premature optimisation is the root of all evil in computer programming." It particularly applies here. Learn to use the available performance libraires, and it does not matter from which language they are called (C or Fortran will not be faster than Python!) This is one of the major reasons Python can be used for HPC (high-performance computing) even though the Python part itself is "slow". Most of these libraires are available for free (GotoBLAS, ATLAS, FFTW, ACML), but Intel MKL and VML require a license fee. Also note that there are comprehensive numerical libraries you can use, which can be linked with multi-threaded performance libraries under the hood. Of particular interest are the Fortran libraries from NAG and IMSL, which are the two gold standards of technical computing. Also note that the linear algebra solvers of NumPy and SciPy in Enthought Python Distribution are linked with Intel MKL. Enthought's license are cheaper than Intel's, and you don't need to build NumPy or SciPy against MKL yourself. Using scipy.linalg from EPD is likely to cover your need for parallel computing with neural nets. 2. Use Cython, threading.Thread, and release the GIL. Perhaps we should have a cookbook example in scipy.org on this. In the "nogil" block you can call a library or do certain things that Cython allows without holding the GIL. 3. Use C, C++ or Fortran with OpenMP, and call these using Cython, ctypes or f2py. (I prefer Cython and Fortran 95, but any combination will do.) 4. Use mpi4py with any MPI implementation. Note that 1-4 are not mutually exclusive, you can always use a certain combination. > I will retain a copy of YOUR shmarray code (not the Bitbucket code) > for some time in the future. I anticipate that my arrays might get > really large, and then copying them might not be practical in terms of > time and memory usage. 
The expensive overhead in passing a NumPy array to multiprocessing.Queue is related to pickle/cPickle, not IPC or making a copy of the buffer. For any NumPy array you can afford to copy in terms of memory, just work with copies.

The shared memory arrays I made are only useful for large arrays. They are just as expensive to pickle in terms of time, but can be inexpensive in terms of memory. Also beware that the buffer is not copied to the pickle, so you need to call .copy() to pickle the contents of the buffer.

But again, I'd urge you to consider a library or threads (threading.Thread in Cython or OpenMP) before you consider multiple processes. The reason I have not updated the sharedmem arrays for two years is that I have come to the conclusion that there are better ways to do this (particularly vendor-tuned libraries). But since they are mostly useful with 64-bit (i.e. large arrays), I'll post an update soon.

If you decide to use a multithreaded solution (or shared memory as IPC), beware of "false sharing". If multiple processors write to the same cache line (typically 64 or 128 bytes, depending on hardware), you'll create an invisible "GIL" that will kill any scalability. That is because dirty cache lines need to be synchronized with RAM. "False sharing" is one of the major reasons that "home-brewed" compute-intensive code will not scale. It is not uncommon to see Java programmers complain about Python's GIL, and then they go on to write I/O-bound code.
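To see that the cost sits in the pickling itself rather than in the transport, here is a rough timing sketch of my own (the array size and the numbers are arbitrary and purely illustrative):

import cPickle as pickle
import timeit
import numpy as np

a = np.zeros(10**7)     # roughly 80 MB of float64

# time only the serialization step that Queue.put() would perform
t = timeit.timeit(lambda: pickle.dumps(a, protocol=2), number=10) / 10
print("pickling: %.3f s per call" % t)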
Re: Multiprocessing, shared memory vs. pickled copies
On 8 apr, 02:38, sturlamolden wrote: > I should probably fix it for 64-bit now. Just recompiling with 64-bit > integers will not work, because I intentionally hardcoded the higher > 32 bits to 0. That was easy, 64-bit support for Windows is done :-) Now I'll just have to fix the Linux code, and figure out what to do with os._exit preventing clean-up on exit... :-( Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing, shared memory vs. pickled copies
On 8 apr, 02:03, sturlamolden wrote: > http://folk.uio.no/sturlamo/python/sharedmem-feb13-2009.zip > Known issues/bugs: 64-bit support is lacking, and os._exit in > multiprocessing causes a memory leak on Linux. I should probably fix it for 64-bit now. Just recompiling with 64-bit integers will not work, because I intentionally hardcoded the higher 32 bits to 0. It doesn't help to represent the lower 32 bits with a 64-bit integer (which it seems someone actually has tried :-D) The memory leak on Linux is pesky. os._exit prevents clean-up code from executing, but unlike Windows, the Linux kernel does no reference counting. I am worried we actually need to make a small kernel driver to make this work properly for Linux, since os._exit makes the current user-space reference counting fail on child process exit. Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Multiprocessing, shared memory vs. pickled copies
On 4 apr, 22:20, John Ladasky wrote: > https://bitbucket.org/cleemesser/numpy-sharedmem/src/3fa526d11578/shm... > > I've added a few lines to this code which allows subclassing the > shared memory array, which I need (because my neural net objects are > more than just the array, they also contain meta-data). But I've run > into some trouble doing the actual sharing part. The shmarray class > CANNOT be pickled.

That is hilarious :-) I see that the bitbucket page has my and Gaël's names on it, but the code is changed and broken beyond repair! I don't want to be associated with that crap! Their "shmarray.py" will not work -- ever. It fails in two ways:

1. multiprocessing.Array cannot be pickled (as you noticed). It is shared by handle inheritance. Thus we (that is, Gaël and I) made a shmem buffer object that could be pickled by giving it a name in the file system, instead of sharing it anonymously by inheriting the handle. Obviously those behind the bitbucket page don't understand the difference between named and anonymous shared memory (that is, System V IPC and BSD mmap, respectively).

2. By subclassing numpy.ndarray, a pickle dump would encode a copy of the buffer. But that is what we want to avoid! We want to share the buffer itself, not make a copy of it! So we changed how NumPy pickles arrays pointing to shared memory, instead of subclassing ndarray. I did that by slightly modifying some code written by Robert Kern.

http://folk.uio.no/sturlamo/python/sharedmem-feb13-2009.zip Known issues/bugs: 64-bit support is lacking, and os._exit in multiprocessing causes a memory leak on Linux. Sturla -- http://mail.python.org/mailman/listinfo/python-list
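For readers who have not seen the copy_reg trick: the idea is to register a reducer so that pickling stores a name that the receiving side can re-attach to, never the buffer contents. The class and the in-process registry below are my own toy stand-ins for named shared memory, not the actual sharedmem code:

import copy_reg            # 'copyreg' on Python 3
import cPickle as pickle

class NamedSegment(object):
    _segments = {}                     # toy stand-in for OS-level named shm

    def __init__(self, name, payload):
        self.name = name
        self.payload = payload
        NamedSegment._segments[name] = payload

def _attach(name):
    # rebuild on unpickling by re-attaching to the named segment
    return NamedSegment(name, NamedSegment._segments[name])

def _reduce(seg):
    # pickle only the name, never the payload
    return (_attach, (seg.name,))

copy_reg.pickle(NamedSegment, _reduce)

seg = NamedSegment("shm-42", bytearray(10**6))
print(len(pickle.dumps(seg, protocol=2)))    # a few dozen bytes, not ~1 MB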
Re: Multiprocessing, shared memory vs. pickled copies
On 5 apr, 02:05, Robert Kern wrote: > PicklingError: Can't pickle <class 'multiprocessing.sharedctypes.c_double_Array_10'>: attribute lookup > multiprocessing.sharedctypes.c_double_Array_10 failed

Hehe :D That is why programmers should not mess with code they don't understand! Gaël and I wrote shmem to avoid multiprocessing.sharedctypes, because they cannot be pickled (they are shared by handle inheritance)! To do this we used the raw Windows API and Unix System V IPC instead of multiprocessing.Array, and the buffer is pickled by giving it a name in the file system. Please be informed that the code on bitbucket has been "fixed" by someone who doesn't understand my code. "If it ain't broke, don't fix it." http://folk.uio.no/sturlamo/python/sharedmem-feb13-2009.zip Known issues/bugs: 64-bit support is lacking, and os._exit in multiprocessing causes a memory leak on Linux.

> Maybe. If the __reduce_ex__() method is implemented properly (and > multiprocessing bugs aren't getting in the way), you ought to be able to pass > them to a Pool just fine. You just need to make sure that the shared arrays > are allocated before the Pool is started. And this only works on UNIX machines. > The shared memory objects that shmarray uses can only be inherited. I believe > that's what Sturla was getting at.

It's a C extension that gives a buffer to NumPy. Then YOU changed how NumPy pickles arrays referencing these buffers, using copy_reg :) Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Extending dict (dict's) to allow for multidimensional dictionary
On 7 Mar, 09:30, Chris Rebert wrote: > You see a tree, I see a database > (http://docs.python.org/library/sqlite3.html):
>
> +--------------+---------+-----+-------+
> | Manufacturer | Model   | MPG | Price |
> +--------------+---------+-----+-------+
> | Ford         | Taurus  | ... | $...  |
> | Toyota       | Corolla | ... | $...  |
> +--------------+---------+-----+-------+

If a tree of dictionaries becomes a database, then the Python interpreter itself becomes a database. Maybe we can implement Python on top of sqlite or Oracle? ;-) -- http://mail.python.org/mailman/listinfo/python-list
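For what it's worth, the usual pure-Python recipe for the "multidimensional dictionary" the thread asks about is a self-nesting defaultdict; a small sketch of my own, not from the thread:

from collections import defaultdict

def tree():
    # each missing key creates another nesting level on demand
    return defaultdict(tree)

cars = tree()
cars["Ford"]["Taurus"]["MPG"] = 28
cars["Toyota"]["Corolla"]["Price"] = "$..."
print(cars["Ford"]["Taurus"]["MPG"])      # 28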
Re: Python fails on math
On 22 Feb, 14:20, christian schulze wrote: > Hey guys, > > I just found out, how much Python fails on simple math. I checked a > simple equation for a friend. Python does not fail. Floating point arithmetic and numerical approximation will do this. If you need symbolic maths, consider using the sympy package. If you want to prove it to yourself, try the same thing numerically with Matlab first, and then symbolically with Maple or Mathematica. Sturla -- http://mail.python.org/mailman/listinfo/python-list
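A quick illustration of the difference (my own example, not the equation from the thread):

>>> 0.1 + 0.2                          # binary floating point, as in any language
0.30000000000000004
>>> from sympy import Rational         # exact symbolic arithmetic
>>> Rational(1, 10) + Rational(2, 10)
3/10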
Re: logging module -- better timestamp accuracy on Windows
On 16 Feb, 15:30, benhoyt wrote: > It seems to me that the logging module should use a millisecond-accurate > timestamp (time.clock) on Windows, just like the "timeit" module does. AFAIK, the Windows performance counter has long-term accuracy issues, so neither is perfect. Preferably we should have a timer with the long-term accuracy of time.time and the short-term accuracy of time.clock. Sturla -- http://mail.python.org/mailman/listinfo/python-list
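A rough way to see the trade-off for yourself (a sketch; the numbers depend entirely on the machine and operating system):

import time

def tick(clock):
    # busy-wait until the clock advances and return the size of one tick
    t0 = clock()
    while True:
        t1 = clock()
        if t1 != t0:
            return t1 - t0

print(tick(time.time))    # around 1-16 ms on Windows, ~1 us on most Unixes
print(tick(time.clock))   # microseconds on Windows (performance counter)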
Re: Non-linear regression help in Python
On 15 Feb, 05:24, Akand Islam wrote: > Dear Sturlamolden, > Thanks for reply. I will follow-up if I need further assistance. > > -- Akand You should use the SciPy user mailing list rather than comp.lang.python for this. Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Non-linear regression help in Python
On 14 Feb, 22:02, Akand Islam wrote: > Hello all, > I want to do non-linear regression by fitting the model (let say, y = > a1*x+b1*x**2+c1*x**3/exp(d1*x**4)) where the parameter (say "c1") must > be in between (0 to 1), and others can be of any values. How can I > perform my job in python? First, install NumPy and SciPy! scipy.optimize.leastsq implements Levenberg-Marquardt, which is close to the 'gold standard' for non-linear least squares. The algorithm will be a bit faster and more accurate if you provide the Jacobian of the residuals, i.e. the partial derivatives of r = y - (a1*x + b1*x**2 + c1*x**3/exp(d1*x**4)) with respect to a1, b1, c1, and d1 (stored in a 4 by n matrix). If you have prior information about c1, you have a Bayesian regression problem. You can constrain c1 between 0 and 1 by assuming a beta prior distribution on c1, e.g. c1 ~ Be(z,z) with 1 < z < 2. Then proceed as you would with Bayesian regression -- i.e. EM, Gibbs sampler, or Metropolis-Hastings. Use scipy.stats.beta to evaluate and numpy.random.beta to sample the beta distribution. The problem is not programming it in Python, but getting the correct equations on paper. Also beware that running the Markov chain Monte Carlo might take a while. Sturla -- http://mail.python.org/mailman/listinfo/python-list
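For the unconstrained part, a minimal scipy.optimize.leastsq sketch of the model above; the data here are synthetic, just to make it runnable, and the constraint on c1 is left out:

import numpy as np
from scipy.optimize import leastsq

def residuals(p, x, y):
    a1, b1, c1, d1 = p
    return y - (a1*x + b1*x**2 + c1*x**3 / np.exp(d1*x**4))

# synthetic data with known parameters and a little noise
x = np.linspace(0.1, 2.0, 50)
y = 0.5*x + 0.3*x**2 + 0.7*x**3 / np.exp(0.1*x**4) + 0.01*np.random.randn(50)

p0 = np.array([1.0, 1.0, 0.5, 0.1])          # rough initial guess
p_opt, ier = leastsq(residuals, p0, args=(x, y))
print(p_opt)                                 # close to 0.5, 0.3, 0.7, 0.1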
Re: EPD 7.0 released
On 14 Feb, 13:35, "Colin J. Williams" wrote: > The purchase price for what, until now, has been open source and free > seems high. The price is not high compared to other tools scientists are using, e.g. Matlab and S-PLUS. If you consider buying an MKL license from Intel only to build NumPy and SciPy against "Intel Math Kernel Library" (MKL), EPD is a less expensive option. You can get NumPy and SciPy built against MKL from Enthought for less than the price of an MKL license -- MKL is $399, EPD is $199. And on top of being cheaper, it saves us all the work of building and installing. How much do you value your own time? Is the price high compared to the time spent "doing it yourself"? How long does it take to configure and build ATLAS on Windows, build NumPy and SciPy against ATLAS, and then build Matplotlib against the ATLAS-dependent NumPy? Have you seen the number of posts on the NumPy and SciPy mailing lists from people going insane trying to build the libraries? Do you still think EPD is expensive? The libraries in EPD (except MKL) are still open source and free if you want to mess with 100s of installers and/or build scripts. Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: EPD 7.0 released
On 14 Feb, 01:50, Robert Kern wrote: > I'd just like to jump in here to clear up this last statement as an Enthought > employee. While Enthought and its employees do contribute to the development > of numpy and scipy in various ways (and paying us money is a great way to let us > do more of it!), there is no direct relationship to the revenue we get from EPD > subscriptions and our contributions to numpy and scipy.

But you do host the website, and several key NumPy and SciPy developers work for you. And NumPy and SciPy would not have reached their current maturity without Enthought. I know that you have commercial interests in the current restructuring of NumPy (such as making it available for .NET), but it does help the development of NumPy as well. Enthought EPD also helps NumPy/SciPy indirectly, by making Python a viable alternative to Matlab:

* Just having one big installer instead of 100 is why I'm allowed to use Python instead of Matlab. Others might have to use my programs, so the runtime cannot take a man-year to install.
* A myriad of installers is a big deterrent for any scientist considering using Python.
* Intel MKL instead of the reference LAPACK (actually lapack_lite) makes EPD very fast for matrix computations.
* It has a 64-bit version (as opposed to only 32-bit in the "official" SciPy installer; that might have changed now.)
* We don't have to know which libraries are important and/or spend time searching for them.
* It comes with C, C++ and Fortran compilers (GCC) preconfigured to work with distutils, link correctly, etc.

Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: EPD 7.0 released
EPD is great, at least for scientific users. There is just one installer, with everything we need, instead of struggling with dozens of libraries to download, configure and build. It is still Python 2.7 (not 3.1) due to libraries like SciPy. A subscription for EPD is also a contribution to the development of NumPy and SciPy. Sturla On 10 Feb, 02:00, Eric Stechmann wrote: > Hi, son. > > Don't know if this would be of any interest to you. Well, I suppose it does > provide some interesting. > > I hope your physical get-together will help out. > > Love you, David. > > Dad > > On Feb 9, 2011, at 8:13 AM, Ilan Schnell wrote: > > > Hello, > > > I am pleased to announce that EPD (Enthought Python Distribution) > > version 7.0 has been released. This major release updates to > > Python 2.7, Intel Math Kernel Library 10.3.1, numpy 1.5.1, in > > addition to updates to many of the other packages included. > > Please find the complete list of additions, updates and bug fixes > > in the change log: > > > http://www.enthought.com/EPDChangelog.html > > > To find more information about EPD, as well as download a 30 day > > free trial, visit this page: > > > http://www.enthought.com/products/epd.php > > > About EPD > > - > > The Enthought Python Distribution (EPD) is a "kitchen-sink-included" > > distribution of the Python Programming Language, including over 90 > > additional tools and libraries. The EPD bundle includes NumPy, SciPy, > > IPython, 2D and 3D visualization, and many other tools. > > > http://www.enthought.com/products/epdlibraries.php > > > It is currently available as a single-click installer for Windows XP, > > Vista and 7, MacOS (10.5 and 10.6), RedHat 3, 4 and 5, as well as > > Solaris 10 (x86 and x86_64/amd64 on all platforms). > > > All versions of EPD (32 and 64-bit) are free for academic use. An > > annual subscription including installation support is available for > > individual and commercial use. Additional support options, including > > customization, bug fixes and training classes are also available: > > > http://www.enthought.com/products/support_level_table.php > > > - Ilan > > -- > >http://mail.python.org/mailman/listinfo/python-announce-list > > > Support the Python Software Foundation: > > http://www.python.org/psf/donations/ > > -- http://mail.python.org/mailman/listinfo/python-list
Re: Idea for removing the GIL...
On 8 Feb, 10:39, Vishal wrote: > Is it possible that the Python process, creates copies of the > interpreter for each thread that is launched, and some how the thread > is bound to its own interpreter ? In .NET lingo this is called an 'AppDomain'. This is also how tcl works -- one interpreter per thread. I once had a mock-up of that using ctypes and Python's C API. However, the problem with 'app domains' is that OS handles are global to the process. To make OS handles private, the easiest solution is to use multiple processes, which incidentally is what the 'multiprocessing' module does (or just os.fork if you are on Unix). Most people would not consider 'app domains' to be a true GIL-free Python, but rather think of free threading comparable to .NET, Java and C++. However, removing the GIL will do no good as long as CPython uses reference counting. Any access to reference counts must be atomic (e.g. requiring a mutex or spinlock). Here we can imagine using fine-grained locking instead of a global interpreter lock. There is a second problem, which might not be as obvious: In parallel computing there is something called 'false sharing', which in this case will be incurred on the reference counts. That is, any updating will dirty the cache lines everywhere; all processors must stop whatever they are doing to synchronize cache with RAM. This 'false sharing' will send the scalability down the drain. To make a GIL-free Python, we must start by removing reference counting in favour of a generational garbage collector. That also comes with a cost. The interpreter will sometimes pause to collect garbage. The memory use will be larger as well, as garbage remains uncollected for a while and is not immediately reclaimed. Many rely on CPython because the interpreter does not pause and a Python process has a small footprint. If we change this, we have 'yet another Java'. There are already IronPython and Jython for those who want this. Sturla -- http://mail.python.org/mailman/listinfo/python-list
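For completeness, the multiple-process route mentioned above, where each worker gets its own interpreter (and hence its own GIL), looks roughly like this:

from multiprocessing import Pool

def work(n):
    # a CPU-bound toy task; each call runs in a separate process
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    pool = Pool(processes=4)
    print(pool.map(work, [10**6] * 4))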
Re: Perl Hacker, Python Initiate
On 2 Feb, 05:36, Gary Chambers wrote: > Given the following Perl script: (...) Let me quote the deceased Norwegian lisp hacker Erik Naggum: "Excuse me while I barf in Larry Wall's general direction." Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: WxPython versus Tkinter.
On 23 Jan, 01:07, rantingrick wrote: > It is time to prove once and for all how dated and worthless Tkinter > is compared to wxPython. Yes, WxPython is not as advanced as i would > like it to be for a 21st century GUI library.

So use PyQt instead.

> However compared to > Tkinter, Wx is light years ahead! Wx is our best hope to move Python > into the 21st century.

I vaguely remember someone saying that about Borland VCL and C++. Now people hardly remember there was a RAD product called Borland C++ Builder, with a sane GUI API compared to Microsoft's MFC. I for one do not like to handcode GUIs. That is why I use wxFormBuilder, and the availability of a good GUI builder dictates my choice of API (currently wxPython, due to the previously mentioned product). Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Python critique
On 11 Des 2010, 00:09, Antoine Pitrou wrote: > > Probably the biggest practical problem with CPython is > > that C modules have to be closely matched to the version of > > CPython. There's no well-defined API that doesn't change. > > Please stop spreading FUD:http://docs.python.org/c-api/index.html Even if the API does not change, there is still static linkage with version dependency. That is avoided with ctypes. Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Python critique
On 10 Des 2010, 21:02, John Nagle wrote: > Probably the biggest practical problem with CPython is > that C modules have to be closely matched to the version of > CPython. There's no well-defined API that doesn't change. ctypes and DLLs in plain C do not change, and do not depend on CPython version. Sturla -- http://mail.python.org/mailman/listinfo/python-list
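A small ctypes sketch of what that looks like in practice (the library lookup is platform-dependent, and the math library is just an example):

import ctypes, ctypes.util

# Load a plain-C shared library by name -- no compile-time tie to the
# CPython version, no rebuild when you switch interpreters.
libm = ctypes.CDLL(ctypes.util.find_library("m"))   # e.g. libm.so.6 on Linux
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))                                # 1.0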
Re: Why is python not written in C++ ?
On 11 Aug, 08:40, Ulrich Eckhardt wrote: > That's true, maybe I don't remember the exact rationale. Especially if even > someone like you, who is much deeper into Python development, doesn't, I'm > wondering if I'm misremembering something Headers (declarations) and sources (implementations) are not the same thing. A C++ compiler can use Python's header files and link with Python's C API correctly. But it cannot compile Python's C source code. A C compiler is required to compile and build Python. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python -Vs- Ruby: A regexp match to the death!
On 9 Aug, 10:21, Steven D'Aprano wrote: > And that it's quite finicky about blank lines between methods and inside > functions. Makes it hard to paste code directly into the interpreter. The combination of editor, debugger and interpreter is what I miss most from Matlab. In Matlab we can have a function or script open in an editor, and use it directly from the interpreter. No need to reimport or anything: edit and invoke. It is also possible to paste data directly from the clipboard into variables in the interpreter. ipython does not have that annoying >>> prompt. -- http://mail.python.org/mailman/listinfo/python-list
Re: Why is python not written in C++ ?
On 4 Aug, 04:41, Grant Edwards wrote: > The issue that would prevent its use where I work is the inability to > hire anybody who knows Ada. You can't hire anybody who knows C++ > either, but you can hire lots of people who claim they do. That is very true. -- http://mail.python.org/mailman/listinfo/python-list
Re: Normalizing A Vector
On 30 Jul, 13:46, Lawrence D'Oliveiro wrote: > Say a vector V is a tuple of 3 numbers, not all zero. You want to normalize > it (scale all components by the same factor) so its magnitude is 1. > > The usual way is something like this: > > L = math.sqrt(V[0] * V[0] + V[1] * V[1] + V[2] * V[2]) > V = (V[0] / L, V[1] / L, V[2] / L) > > What I don’t like is having that intermediate variable L leftover after the > computation.

L = math.sqrt(V[0] * V[0] + V[1] * V[1] + V[2] * V[2])
V = (V[0] / L, V[1] / L, V[2] / L)
del L

But this is the kind of programming task where NumPy is nice (assuming V is an ndarray rather than a tuple):

V[:] = V / np.sqrt((V**2).sum())

Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Why is python not written in C++ ?
On 3 Aug, 04:03, sturlamolden wrote: > struct File { > std::FILE *fid; > File(const char *name) { > // acquire resource in constructor > fid = std::fopen(name, "r"); > if (!fid) throw some_exception; > } > ~File() { > // free resource in destructor > if (fid) std::fclose(fid); > } > }; And since this is comp.lang.python, I'll add that this sometimes applies to Python as well. Python, like C++, can have the call stack rewound by an exception. If we call some raw OS resource allocation, e.g. malloc or fopen using ctypes, we have to place a deallocation in __del__ (and make sure the object is never put in a reference cycle). The safer option, however, is to use a C extension object, which is guaranteed to have the destructor called (that is, __dealloc__ when using Cython or Pyrex). If we place raw resource allocation arbitrarily in Python code, it is just as bad as in C++. But in Python, programmers avoid this trap by mostly never allocating raw OS resources from within Python code. E.g. Python's file object is programmed to close itself on garbage collection, unlike a pointer returned from fopen using ctypes. Sturla -- http://mail.python.org/mailman/listinfo/python-list
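A sketch of the ctypes trap described above (the library name is platform-specific and purely illustrative): a raw FILE* obtained through ctypes closes nothing by itself, so the wrapper has to do it in __del__.

import ctypes, ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))   # e.g. libc.so.6 on Linux
libc.fopen.restype = ctypes.c_void_p
libc.fopen.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
libc.fclose.argtypes = [ctypes.c_void_p]

class RawFile(object):
    """Owns a raw FILE*; nothing will release it unless we do."""
    def __init__(self, name):
        self.fid = libc.fopen(name, b"rb")
        if not self.fid:
            raise IOError("could not open %r" % name)
    def __del__(self):
        # manual clean-up -- and only reliable if the object is never
        # caught in a reference cycle
        if getattr(self, "fid", None):
            libc.fclose(self.fid)
            self.fid = None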
Re: Why is python not written in C++ ?
On 3 Aug, 02:47, Roy Smith wrote: > This one I don't understand. Yes, I get RAII, but surely there are > valid reasons to allocate memory outside of constructors. Containers > which resize themselves (such as std::vector) are one obvious example.

That is because an exception might skip an arbitrarily placed delete[] or std::fclose when it rewinds the stack, so to avoid resource leaks they must be placed inside a destructor of an object on the stack (which will be called). This is unsafe; anyone who writes this in C++ should be flogged:

void foobar()
{
    std::FILE *fid = std::fopen("whatever", "r");
    // some code here
    std::fclose(fid);
}

This idiom is safer:

struct File {
    std::FILE *fid;
    File(const char *name) {
        // acquire resource in constructor
        fid = std::fopen(name, "r");
        if (!fid) throw some_exception;
    }
    ~File() {
        // free resource in destructor
        if (fid) std::fclose(fid);
    }
};

void foobar()
{
    File file("whatever");
    // some code here
}

It is for the very same reason we should use std::vector instead of new[] for arrays. It is why new and delete should only be used inside constructors/destructors. It is also why C++ has references in addition to pointers. Which means this is bad in C++, as new and delete are arbitrarily placed:

void foobar()
{
    File *myfile = new File("whatever");
    // some code here
    delete myfile;
}

An object should go on the stack, because if an exception is thrown, we need the destructor call. Which is why this (as shown above) is ok:

void foobar()
{
    File file("whatever");
    // some code here
}

This is the kind of gotcha that allows C++ to shoot your leg off. In comparison C is much more forgiving, because there are no exceptions (unless you use setjmp/longjmp). This is ok in C, but not in C++:

void foobar()
{
    FILE *fid = fopen("whatever", "r");
    /* some code here */
    fclose(fid);
}

For the same reason we can place malloc and free wherever we like in C code. But in C++ we must restrict std::malloc and std::free (as well as new and delete) to constructor and destructor pairs. This kind of design is mandatory to make safe C++ programs. But it is not enforced by the compiler. And since the majority of C++ programmers don't abide by these rules, Java and C# tend to produce far fewer runtime errors and memory/resource leaks. And C++ textbooks tend to avoid teaching these important details. I'm inclined to believe it is because the authors don't understand them themselves.

Objects on the stack are also required for operator overloading in C++. A common novice mistake is this:

std::vector<double> *myvec = new std::vector<double>(10);
myvec[5] = 10.0;   // why does the compiler complain?

And then the novice will spend hours contemplating why the stupid compiler claims the type of myvec[5] is std::vector<double>. There was recently a post to the Cython mailing list claiming there is a bug in Cython's auto-generated C++ because of this. But this is how it should be:

std::vector<double> myvec(10);
myvec[5] = 10.0;   // ok

And now we see why C++ has references, as that is how we can efficiently reference an object on the stack without getting the operator overloading problem above. C++ is good, but most programmers are better off with C. It has fewer gotchas. And if we need OOP, we have Python... Sturla -- http://mail.python.org/mailman/listinfo/python-list
Re: Why is python not written in C++ ?
On 3 Aug, 01:37, Mark Lawrence wrote: > A bug is a bug is a bug? According to Grace Hopper, a bug might be a moth, in which case the best debugger is a pair of forceps. -- http://mail.python.org/mailman/listinfo/python-list
Re: Why is python not written in C++ ?
On 3 Aug, 01:14, Martin Gregorie wrote: > Bottom line: All this would still have happened regardless of the > programming language used. I am quite sure C and Fortran make it unlikely for an unhandled exception to trigger the autodestruct sequence. But it's nice to know when flying that modern avionics is programmed in a language where this is possible. ;) -- http://mail.python.org/mailman/listinfo/python-list
Re: Why is python not written in C++ ?
On 3 Aug, 00:27, Paul Rubin wrote: > Certain folks in the functional-programming community consider OO to be > a 1980's or 1990's approach that didn't work out, and that what it was > really trying to supply was polymorphism. C++ programs these days > apparently tend to use template-based generics rather than objects and > inheritance for that purpose. It avoids virtual function calls at the expense of unreadable code and errors that are nearly impossible to trace. It seems many think this is a good idea because Microsoft did this with ATL and WTL. There are also those who think template metaprogramming is a good idea. But who uses a C++ compiler too dumb to unroll a for loop? In my experience, trying to outsmart a modern compiler is almost always a bad idea. > I have the impression that Ada has an undeservedly bad rap because of > its early implementations and its origins in military bureaucracy. It is annoyingly verbose, reminds me of Pascal (I hate the looks of it), and is rumoured to produce slow bloatware. And don't forget Ariane 5 ;) > I'd > certainly consider it as an alternative to C or C++ if I had to write a > big program in a traditional procedural language. I still prefer Fortran 95 :-) -- http://mail.python.org/mailman/listinfo/python-list
Re: Why is python not written in C++ ?
On 2 Aug, 05:04, Tomasz Rola wrote: > And one should not forget about performance. C++ was for a long time > behind C, and even now some parts (like iostreams) should be avoided in > fast code. For fast I/O one must use platform specific APIs, such as Windows' i/o completion ports and memory mapping. iostreams in C++ and stdio in C are ok for less demanding tasks. -- http://mail.python.org/mailman/listinfo/python-list
Re: Why is python not written in C++ ?
On 2 Aug, 01:08, candide wrote: > Has it ever been planned to rewrite in C++ the historical implementation > (of course in an object oriented design) ? OO programming is possible in C. Just take a look at GNOME and GTK. Perl is written in C++. That is not enough to make me want to use it ;) To be honest, C++ can be a great tool. But very few know how to use it correctly. C++ textbooks are also written by people who don't understand the language, and teach bad habits. The typical examples revealing incompetence are use of new[] instead of std::vector, and dynamic resource allocation outside constructors. C++ compilers used to be bloatware generators; C++ compilers of 2010 are not comparable to those of 1990. C++ is also a PITA for portability. It is not sufficient for Python to only build with Microsoft Visual Studio. -- http://mail.python.org/mailman/listinfo/python-list
Re: Performance ordered dictionary vs normal dictionary
On 29 Jul, 03:47, Navkirat Singh wrote: > I was wondering what would be better to do some medium to heavy book keeping > in memory - Ordered Dictionary or a plain simple Dictionary object??

It depends on the problem. A dictionary is a hash table. An ordered dictionary in the general sense -- a mapping sorted on its keys -- is a binary search tree (BST). (Note that Python's collections.OrderedDict is not a BST; it is a hash table that additionally remembers insertion order.) Hash tables and BSTs are different data structures. The main things to note are:

- Access is best-case O(1) for the hash table and O(log n) for the BST.
- Worst-case behavior for access is O(n) for both. The pathological conditions are excessive collisions (hash) or an unbalanced tree (BST). In pathological cases they both converge towards a linked list.
- Searches are only meaningful for == and != for a hash table, but BST searches are also meaningful for >, <, <=, and >=. For this reason, databases are often implemented as BSTs.
- A BST can be more cache-friendly than a hash table, particularly when they are large. (Remember that O(1) can be slower than O(log n). Big-O only says how run-time scales with n.)

That said, I have not compared ordered dicts with normal dicts, as I still use 2.6. -- http://mail.python.org/mailman/listinfo/python-list
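A quick check of what collections.OrderedDict (Python 2.7+) actually gives you -- insertion order, not sorted keys:

from collections import OrderedDict

d = OrderedDict()
d["banana"] = 3
d["apple"] = 1
print(list(d.keys()))     # ['banana', 'apple'] -- insertion order is kept
print(sorted(d.keys()))   # ['apple', 'banana'] -- sorting is a separate step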
Re: Accumulate function in python
On 19 Jul, 13:18, dhruvbird wrote: > Hello, > I have a list of integers: x = [ 0, 1, 2, 1, 1, 0, 0, 2, 3 ] > And would like to compute the cumulative sum of all the integers > from index zero into another array. So for the array above, I should > get: [ 0, 1, 3, 4, 5, 5, 5, 7, 10 ] > What is the best way (or pythonic way) to get this. At least for large arrays, this is the kind of task where NumPy will help. >>> import numpy as np >>> np.cumsum([ 0, 1, 2, 1, 1, 0, 0, 2, 3 ]) array([ 0, 1, 3, 4, 5, 5, 5, 7, 10]) -- http://mail.python.org/mailman/listinfo/python-list
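For plain Python lists (no NumPy), a simple running total does the same thing; a small sketch of my own:

def cumsum(seq):
    total, out = 0, []
    for v in seq:
        total += v
        out.append(total)
    return out

print(cumsum([0, 1, 2, 1, 1, 0, 0, 2, 3]))   # [0, 1, 3, 4, 5, 5, 5, 7, 10]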
Re: Exposing buffer interface for non-extension types?
On 21 Jul, 02:38, Ken Watford wrote: > Perhaps, but *why* is it only a pure C-level interface? It is exposed to Python as memoryview. If memoryview is not sufficient, we can use ctypes.pythonapi to read the C struct. -- http://mail.python.org/mailman/listinfo/python-list
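A tiny example of what memoryview already gives you from pure Python (works on 2.7 and 3.x):

buf = bytearray(b"abc")
mv = memoryview(buf)     # exposes buf's buffer, no copy is made
mv[0:1] = b"A"           # writes straight through to the bytearray
print(buf)               # bytearray(b'Abc')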
Re: why is this group being spammed?
On 18 Jul, 07:01, "be.krul" wrote: > why is this group being spammed? There used to be bots that issued cancel messages against spam, but I don't think they are actively maintained anymore. -- http://mail.python.org/mailman/listinfo/python-list
Re: Sharing: member type deduction for member pointers (Alf's device?)
On 17 Jul, 15:02, "Alf P. Steinbach /Usenet" wrote: > #include // PyWeakPtr, PyPtr, PyModule, PyClass > using namespace progrock; > > namespace { > using namespace cppy; > > struct Noddy > { > PyPtr first; > PyPtr last; > int number; > > Noddy( PyWeakPtr pySelf, PyPtr args, PyPtr kwArgs ) > : number( 0 ) > { > devsupport::suppressUnusedWarning( pySelf ); > > PyWeakPtr pFirstName = 0; > PyWeakPtr pLastName = 0; > > static char* kwlist[] = { "first", "last", "number", 0 }; > > ::PyArg_ParseTupleAndKeywords( > args.rawPtr(), kwArgs.rawPtr(), "|OOi", kwlist, > pointerTo( pFirstName ), pointerTo( pLastName ), &number > ) > >> Accept< IsNonZero >() > || throwX( "Invalid args" ); > > if( pFirstName != 0 ) { first = pFirstName; } > if( pLastName != 0 ) { last = pLastName; } > } > > PyPtr name() > { > (first != 0) > || throwX( ::PyExc_AttributeError, "first" ); > (last != 0) > || throwX( ::PyExc_AttributeError, "last" ); > return (PyString( first ) + L" " + PyString( last )).pyPtr(); > } > }; > > struct NoddyPyClass: PyClass< Noddy > > { > NoddyPyClass( PyModule& m, PyString const& name, PyString const& doc > ) > : PyClass< Noddy >( m, name, doc, Exposition() > .method( > L"name", CPPY_GLUE( name ), > L"Return the name, combining the first and last name" > ) > .attribute( > L"first", CPPY_GLUE( first ), L"first name" > ) > .attribute( > L"last", CPPY_GLUE( last ), L"last name" > ) > .attribute( > L"number", CPPY_GLUE( number ), L"noddy number" > ) > ) > {} > }; > > class NoddyModule: public PyModule > { > private: > NoddyPyClass noddyPyClass_; > > public: > NoddyModule() > : PyModule( > L"noddy2", L"Example module that creates an extension type." > ) > , noddyPyClass_( *this, > L"Noddy", L"A Noddy object has a name and a noddy number" ) > {} > }; > > } // namespace > > PyMODINIT_FUNC > PyInit_noddy2() > { > return cppy::safeInit< NoddyModule >();} > I wonder if this is readable / self-documenting, or not? Are you serious? It's C++, Heaven forbid, and you wonder if it's readable or not? -- http://mail.python.org/mailman/listinfo/python-list
Re: Struqtural: High level database interface library
On 17 Jul, 07:29, Nathan Rice wrote: > Let's push things to the edge now with a quick demo of many to many > relationship support. For this example we're going to be using the > following XML:
>
> [XML example -- the markup was stripped in transit; it listed a Sales department (id 123) with employees Raul Lopez (143), John Smith (687) and Ming Chu (947), and a Marketing department (id 456) with Jim Jones (157), John Smith (687) and Ming Chu (947)]

Oh yes, I'd rather write pages of that than some SQL in a Python string. -- http://mail.python.org/mailman/listinfo/python-list
Re: Cpp + Python: static data dynamic initialization in *nix shared lib?
On 13 Jul, 22:35, "Alf P. Steinbach /Usenet" wrote: > Yes, I know Boost.Python in more detail and I've heard of all the rest except > SIP, In my opinion, SIP is the easiest way of integrating C++ and Python. Just ignore the PyQt stuff. http://www.riverbankcomputing.co.uk/static/Docs/sip4/using.html#a-simple-c-example http://www.riverbankcomputing.co.uk/software/sip/intro (SIP 4 can also expose C libraries to Python, not just C++.) -- http://mail.python.org/mailman/listinfo/python-list
Re: Cpp + Python: static data dynamic initialization in *nix shared lib?
On 13 Jul, 22:35, "Alf P. Steinbach /Usenet" wrote: > In practice, 'extern "C"' matters for the jump tables because for those few > compilers if any where it really matters (not just the compiler emitting a > warning like reportedly Sun CC does), different linkage can imply different > machine code level calling convention. I see. Just stick to MSVC and GNU and that never happens, just do a C style cast. > Yes, I know Boost.Python in more detail and I've heard of all the rest except > SIP, but then regarding SIP I really don't like QT You don't have to use Qt to use SIP. It's just a tool to automatically wrap C++ for Python (like Swig, except designed for C++). > And as you'd guess if you were not in silly ignoramus assertion-mode, I'm not > reinventing the wheel. It seems you are re-inventing PyCXX (or to a lesser extent Boost.Python). -- http://mail.python.org/mailman/listinfo/python-list