On 2 Oct, 20:19, Ole Streicher <ole-usenet-s...@gmx.net> wrote:

> I *do* worry about speed. And I use Python. Why not? There are powerful
> libraries available.
I do as well. But "powerful libraries" should release the GIL. Let me
rephrase that: I am not worried about speed in the part of my code that
uses Python.

> Usually this is not an option: numpy is AFAIK not available for Cython,
> neither is scipy (of course).

Anything available to Python is available to Cython. Cython even has a
special syntax for working with NumPy arrays.

> > Using more than one process is always an option, i.e. os.fork if you
> > have it or multiprocessing if you don't. Processes don't share GIL.
>
> Not if the threads/processes need to share lots of data. Interprocess
> communication can be very expensive -- even more so if one needs to
> share Python objects.

I have written a NumPy array subclass that uses named shared memory as
its buffer. Such arrays are pickled by name (i.e. the buffer itself is
not copied), which makes them very efficient when used with
multiprocessing.

http://folk.uio.no/sturlamo/python/sharedmem-feb13-2009.zip

IPC without shared memory is generally cheap. The overhead of a pipe or
a Unix domain socket is little more than that of a memcpy. The expensive
part is serializing and deserializing the Python object.

Also consider that the majority of the world's supercomputers are
programmed with MPI, which uses processes and IPC instead of threads. On
clusters, threads are not even an option. Even on shared-memory
machines, MPI tends to be more efficient than threads/OpenMP, as threads
often run into issues with cache use and false sharing.

S.M.

--
http://mail.python.org/mailman/listinfo/python-list
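For anyone curious, the basic idea of backing a NumPy array with shared
memory can be sketched with just the stdlib: this uses
multiprocessing.RawArray rather than the sharedmem package linked above,
and the function names here are illustrative, not from that package.

```python
import ctypes
import multiprocessing
import numpy as np

def worker(raw):
    # Re-wrap the shared buffer in the child process; np.frombuffer
    # creates a zero-copy view, so writes land in the shared segment.
    arr = np.frombuffer(raw, dtype=np.float64)
    arr[:] = np.arange(len(arr))

if __name__ == "__main__":
    # Allocate 4 doubles in shared memory. multiprocessing knows how to
    # hand this object to a child without copying the underlying buffer.
    raw = multiprocessing.RawArray(ctypes.c_double, 4)
    p = multiprocessing.Process(target=worker, args=(raw,))
    p.start()
    p.join()
    # The parent sees the child's writes through its own zero-copy view.
    print(np.frombuffer(raw, dtype=np.float64))
```

The point is that only a handle to the segment crosses the process
boundary; both processes then operate on the same physical memory.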
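The "pickled by name" trick can also be shown in miniature. The sketch
below uses an in-process dict as a hypothetical stand-in for OS named
shared memory (a real implementation would attach a shared-memory
segment by name on unpickling); the class and function names are my own
invention, not the sharedmem API.

```python
import pickle

# Hypothetical registry standing in for the OS's named shared memory.
_SEGMENTS = {}

def _attach(name):
    """Rebuild a NamedBuffer on unpickling without copying any data."""
    obj = object.__new__(NamedBuffer)
    obj.name = name
    return obj

class NamedBuffer:
    """An object whose pickle contains its name, not its contents."""
    def __init__(self, name, data):
        self.name = name
        _SEGMENTS[name] = data   # "create" the named segment

    @property
    def data(self):
        return _SEGMENTS[self.name]

    def __reduce__(self):
        # Only the segment name crosses the pipe; the receiver
        # re-attaches to the same buffer instead of deserializing it.
        return (_attach, (self.name,))

buf = NamedBuffer("seg0", bytearray(10**6))   # ~1 MB of data
wire = pickle.dumps(buf)                      # a few dozen bytes
restored = pickle.loads(wire)
```

This is why such arrays are cheap to pass through multiprocessing: the
serialization cost is constant, independent of the buffer size.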