On Wed, 2013-05-01 at 16:37 -0400, Yaroslav Halchenko wrote:
> On Wed, 01 May 2013, Sebastian Berg wrote:
>
> > There really is no point discussing here; this has to do with numpy
> > doing iteration order optimization, and you actually *want* this. Let's
> > for a second assume that the old behavior was better; then the next guy
> > is going to ask: "Why is np.add.reduce(array, axis=0) so much slower
> > than reduce(array, np.add)?". This is a huge speed improvement from Mark's
> > new iterator for reductions over the slow axes...
>
> btw -- is there something like pandas' vbench for numpy? i.e. where
> it would be possible to track/visualize such performance
> improvements/hits?
Sorry if it seemed harsh, but I only skimmed the mails and it seemed a bit
like an obvious piece was missing... There are no benchmark tests I am
aware of. You can try:

    a = np.random.random((1000, 1000))

and then time a.sum(1) and a.sum(0). On 1.7, the fast axis (1) is only
slightly faster than the sum over the slow axis. On earlier numpy versions
you will probably see something like half the speed for the slow axis
(I only have an ancient or a 1.7 numpy right now, so I am reluctant to give
exact timings).

- Sebastian

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
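[Editor's note: a minimal sketch of the benchmark Sebastian suggests above, timing a reduction over the fast (contiguous) axis versus the slow axis of a C-ordered array. The array shape matches his example; the repeat and number counts are illustrative choices, not from the post.]

```python
import timeit

import numpy as np

a = np.random.random((1000, 1000))

# For a C-ordered 2-D array, axis=1 reduces along the contiguous
# (fast) axis, while axis=0 reduces along the strided (slow) axis.
t_fast = min(timeit.repeat(lambda: a.sum(1), number=100, repeat=3))
t_slow = min(timeit.repeat(lambda: a.sum(0), number=100, repeat=3))

print(f"sum over fast axis (1): {t_fast:.4f} s")
print(f"sum over slow axis (0): {t_slow:.4f} s")
print(f"slow/fast ratio: {t_slow / t_fast:.2f}")
```

On numpy >= 1.7 the ratio should be close to 1, since the reduction iterator reorders the loop over the slow axis; on older versions the slow axis was roughly twice as slow.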