On 4/4/11 3:20 PM, John Ladasky wrote:
> Hi folks,
>
> I'm developing some custom neural network code.  I'm using Python 2.6,
> Numpy 1.5, and Ubuntu Linux 10.10.  I have an AMD 1090T six-core CPU,
> and I want to take full advantage of it.  I love to hear my CPU fan
> running, and watch my results come back faster.

You will want to ask numpy questions on the numpy mailing list.

  http://www.scipy.org/Mailing_Lists

> When I'm training a neural network, I pass two numpy.ndarray objects
> to a function called evaluate.  One array contains the weights for the
> neural network, and the other array contains the input data.  The
> evaluate function returns an array of output data.
>
> I have been playing with multiprocessing for a while now, and I have
> some familiarity with Pool.  Apparently, arguments passed to a Pool
> subprocess must be picklable.  Pickling is still a pretty vague
> process to me, but I can see that you have to write custom
> __reduce__ and __setstate__ methods for your objects.  An example of
> code which creates a pickle-friendly ndarray subclass is here:
>
> http://www.mail-archive.com/numpy-discussion@scipy.org/msg02446.html

Note that numpy arrays are already pickle-friendly. That message explains how, *if* you are already subclassing ndarray, to make your subclass pickle the extra information it holds.
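
Here is a minimal sketch of that pattern (the MetaArray subclass and its info attribute are made up for illustration): append your extra attributes to the state tuple that ndarray.__reduce__() returns, and peel them back off in __setstate__():

import pickle
import numpy as np

class MetaArray(np.ndarray):
    # Hypothetical ndarray subclass carrying one extra attribute, 'info'.

    def __new__(cls, input_array, info=None):
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        # Called on view casting and slicing; propagate the metadata.
        self.info = getattr(obj, 'info', None)

    def __reduce__(self):
        # Append our metadata to the state tuple ndarray provides.
        reconstruct, args, state = np.ndarray.__reduce__(self)
        return reconstruct, args, state + (self.info,)

    def __setstate__(self, state):
        # Peel our metadata back off, then let ndarray restore the rest.
        self.info = state[-1]
        np.ndarray.__setstate__(self, state[:-1])

a = MetaArray(np.arange(5), info={'layer': 'hidden'})
b = pickle.loads(pickle.dumps(a))
print b.info   # -> {'layer': 'hidden'}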

> Now, I don't know that I actually HAVE to pass my neural network and
> input data as copies -- they're both READ-ONLY objects for the
> duration of an evaluate function (which can go on for quite a while).
> So, I have also started to investigate shared-memory approaches.  I
> don't know how a shared-memory object is referenced by a subprocess
> yet, but presumably you pass a reference to the object, rather than
> the whole object.  Also, it appears that subprocesses acquire a
> temporary lock over a shared memory object, and thus one process may
> well spend time waiting for another (individual CPU caches may
> sidestep this problem?)  Anyway, an implementation of a shared-memory
> ndarray is here:
>
> https://bitbucket.org/cleemesser/numpy-sharedmem/src/3fa526d11578/shmarray.py
>
> I've added a few lines to this code which allow subclassing the
> shared memory array, which I need (because my neural net objects are
> more than just the array; they also contain meta-data).

Honestly, you should avoid subclassing ndarray just to add metadata. It never works well. Make a plain class, and keep the arrays as attributes.
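
For example (a minimal sketch; the NeuralNet name and its attributes are invented here): composition gives you pickling for free, because a plain instance pickles its __dict__, and ndarray attributes pickle right along with it.

import pickle
import numpy as np

class NeuralNet(object):
    # Plain container class: the arrays are attributes, not a base class.

    def __init__(self, weights, layer_sizes, activation='tanh'):
        self.weights = np.asarray(weights)
        self.layer_sizes = layer_sizes
        self.activation = activation

net = NeuralNet(np.random.rand(10), layer_sizes=[4, 3, 2])
clone = pickle.loads(pickle.dumps(net))   # works with no extra methods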

> But I've run
> into some trouble doing the actual sharing part.  The shmarray class
> CANNOT be pickled.

Please never just *say* that something doesn't work. Show us what you tried, and show us exactly what output you got. I assume you tried something like this:

[Downloads]$ cat runmp.py
from multiprocessing import Pool
import shmarray


def f(z):
    return z.sum()

# Allocate two shared-memory arrays, then try to send them to the
# workers as arguments -- which requires pickling them.
y = shmarray.zeros(10)
z = shmarray.ones(10)

p = Pool(2)
print p.map(f, [y, z])


And got output like this:

[Downloads]$ python runmp.py
Exception in thread Thread-2:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/threading.py", line 530, in __bootstrap_inner
    self.run()
File "/Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/threading.py", line 483, in run
    self.__target(*self.__args, **self.__kwargs)
File "/Library/Frameworks/Python.framework/Versions/7.0/lib/python2.7/multiprocessing/pool.py", line 287, in _handle_tasks
    put(task)
PicklingError: Can't pickle <class 'multiprocessing.sharedctypes.c_double_Array_10'>: attribute lookup multiprocessing.sharedctypes.c_double_Array_10 failed


Now, sharedctypes is the multiprocessing submodule that is supposed to implement shared arrays. Underneath, it creates types dynamically, like the c_double_Array_10 type in the traceback. multiprocessing has a custom pickler which keeps a registry of reduction functions for types that do not implement a __reduce_ex__() method. For these dynamically created types, which cannot be imported from a module, this registry is the only way to make them picklable. At least at one point, the Connection objects which communicate between processes would use this custom pickler to serialize objects to bytes in order to transmit them.
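
A sketch of how that registry is used (in Python 2.x the custom pickler lives in the internal multiprocessing.forking module; that location is an implementation detail, and the Unpicklable class here is invented):

from multiprocessing.forking import ForkingPickler

class Unpicklable(object):
    pass

def reduce_unpicklable(obj):
    # A reduction function returns (callable, args); calling
    # callable(*args) in the receiving process rebuilds the object.
    return (Unpicklable, ())

# Add the type to ForkingPickler's dispatch table, which is how
# sharedctypes registers each dynamically created array type.
ForkingPickler.register(Unpicklable, reduce_unpicklable)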

However, at least in Python 2.7, multiprocessing seems to have a C extension module defining the Connection objects. Unfortunately, it looks like this C extension just imports the regular pickler that is not aware of these custom types. That's why you get this error. I believe this is a bug in Python.

So what did you try, and what output did you get? What version of Python are you using?

> I think that my understanding of multiprocessing
> needs to evolve beyond the use of Pool, but I'm not sure yet.  This
> post suggests as much.
>
> http://mail.scipy.org/pipermail/scipy-user/2009-February/019696.html

Maybe. If the __reduce_ex__() method is implemented properly (and multiprocessing bugs aren't getting in the way), you ought to be able to pass them to a Pool just fine. You just need to make sure that the shared arrays are allocated before the Pool is started. And this only works on UNIX machines, where the worker processes inherit the shared memory when they are fork()ed; the shared memory objects that shmarray uses can only be inherited, not passed explicitly. I believe that's what Sturla was getting at.
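
A sketch of that inheritance pattern, reusing the shmarray module from the example above: allocate the arrays at module level before creating the Pool, and have the workers reach them as globals, so that only a small picklable index crosses the process boundary.

from multiprocessing import Pool
import shmarray

# Allocate the shared arrays BEFORE the Pool forks its workers;
# the children then inherit the shared memory via fork() (UNIX-only).
y = shmarray.zeros(10)
z = shmarray.ones(10)

def f(i):
    # Look up the inherited global instead of receiving the array
    # as an argument, so nothing unpicklable has to be transmitted.
    return [y, z][i].sum()

p = Pool(2)
print p.map(f, [0, 1])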

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
