> IIUC, PIL and numpy don't share exactly the same data model, so you may
> have to make a memory copy to go from one to the other -- that may be the
> source of your performance decrease.
>
> If you really want to know, you could profile the code one step at
> a time (leaving out the mean, for instance) to see where the time is going.
Following your advice I wrote three different scripts and profiled them.
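For reference, imarr below is the image as a numpy array; the setup is roughly the following (a sketch only -- the filename is a placeholder, and the PIL-to-numpy conversion step may itself involve the memory copy you mention):

#Setup (sketch) - load the image with PIL and convert it to a numpy array;
#for an RGB image this gives shape (height, width, 3) with dtype uint8
import numpy as np
from PIL import Image

im = Image.open("test.png")   # placeholder filename
imarr = np.asarray(im)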
#Script 1 - indexing: average each channel through a 2D view
for i in range(10):
    imarr[:,:,0].mean()
    imarr[:,:,1].mean()
    imarr[:,:,2].mean()

#Script 2 - slicing: average each channel through a (h, w, 1) slice
for i in range(10):
    imarr[:,:,0:1].mean()
    imarr[:,:,1:2].mean()
    imarr[:,:,2:3].mean()

#Script 3 - reshape: flatten to (pixels, 3) and average per column
for i in range(10):
    imarr.reshape(-1,3).mean(axis=0)
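I timed them with a harness along these lines (a sketch -- my actual timing code may have differed, and 'test.png' is again a placeholder):

#Timing sketch - time each variant with timeit, 10 repetitions each
import timeit

setup = ("import numpy as np; from PIL import Image; "
         "imarr = np.asarray(Image.open('test.png'))")   # placeholder file

variants = [
    ("indexing", "imarr[:,:,0].mean(); imarr[:,:,1].mean(); imarr[:,:,2].mean()"),
    ("slicing",  "imarr[:,:,0:1].mean(); imarr[:,:,1:2].mean(); imarr[:,:,2:3].mean()"),
    ("reshape",  "imarr.reshape(-1,3).mean(axis=0)"),
]
for label, stmt in variants:
    print(label, timeit.timeit(stmt, setup=setup, number=10), "sec")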
For a 2000x2000 RGB image (~11 MB) the times were:
script 1: 5.432 sec
script 2: 10.234 sec
script 3: 4.980 sec
I also ran the same loops without the mean() call, this time for 1000 iterations, and the results were:
script 1: 0.463 sec (~6 MB of RAM)
script 2: 0.465 sec (~3 MB of RAM)
script 3: 0.462 sec (~2 MB of RAM)
Script 3, which you proposed, has the best performance, while script 2 is very
slow. I can't draw a firm conclusion from this, but I'll use the third approach.
I'll post my results back to the numpy list to see if they have an idea.
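Before I do, one angle worth checking (a sketch below, not something included in the timings above) is the memory layout each variant hands to mean(): per-channel indexing and slicing both produce strided views, while reshape(-1,3) stays a contiguous view, assuming imarr itself is C-contiguous:

#Layout check (sketch) - which variants give numpy a contiguous block?
for label, a in [("indexing", imarr[:,:,0]),
                 ("slicing",  imarr[:,:,0:1]),
                 ("reshape",  imarr.reshape(-1,3))]:
    print(label, a.flags['C_CONTIGUOUS'], a.strides)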
_______________________________________________
Image-SIG maillist - [email protected]
http://mail.python.org/mailman/listinfo/image-sig