Hi Will,

Thanks for your reply. The simplified code is below; you can run it if you like. It takes 7 seconds to process 1000 rows, which is tolerable, but I wonder why it takes so long. I also timed a plain for loop over the same number of rows without touching the arrays, and that takes only 1 second per 1000 rows. Isn't a vectorized operation supposed to run quickly?
from numpy import *

componentcount = 300000
currSum = zeros(componentcount)
row = zeros(componentcount)  # current row
rowcount = 50000
for i in range(1, rowcount):
    row[:] = 1
    currSum = currSum + row

> Array indexing is unlikely to be the culprit. Could it not just be
> slow, because you are processing a lot of data? With numbers those big
> I would expect to have enough time to go make a coffee, then drink it.
>
> If you think it is slower than it could be, post more code for
> optimization advice...
>
> Will McGugan
> --
> http://www.willmcgugan.com
--
http://mail.python.org/mailman/listinfo/python-list
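[Editor's note: one likely cost in the loop above is that `currSum = currSum + row` allocates a fresh 300000-element array on every iteration. A minimal sketch of the same accumulation done in place (assuming NumPy; the smaller `rowcount` here just matches the 1000-row timing mentioned above):

    import numpy as np

    componentcount = 300000
    rowcount = 1000

    currSum = np.zeros(componentcount)
    row = np.zeros(componentcount)  # current row

    for i in range(rowcount):
        row[:] = 1
        # In-place add: reuses currSum's buffer instead of
        # allocating a new 300000-element array each iteration.
        currSum += row

Whether this closes the gap depends on the machine and NumPy version, but avoiding the per-iteration allocation is usually the first thing to try.]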