And to put the findings reported so far into some kind of automated form,
please welcome:

http://www.onerussian.com/tmp/numpy-vbench/#benchmarks-performance-analysis

This is based on a simple one-way ANOVA comparing the last 10 commits against
a point in the past where a window of 10 commits had the smallest timings and
differed significantly from the last 10 commits.
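
For illustration only, a minimal sketch of that comparison -- window size,
significance threshold, and names here are my assumptions, not the actual
code behind the page:

    import numpy as np
    from scipy.stats import f_oneway

    def detect_regression(timings, window=10, alpha=0.05):
        # timings: 1-d numpy array of per-commit benchmark timings, oldest first
        recent = timings[-window:]
        past = timings[:-window]
        # baseline: the past window of `window` commits with the smallest mean timing
        means = [past[i:i + window].mean() for i in range(len(past) - window + 1)]
        best = int(np.argmin(means))
        baseline = past[best:best + window]
        # simple one-way ANOVA between that baseline and the last `window` commits
        F, p = f_oneway(baseline, recent)
        return p < alpha and recent.mean() > baseline.mean()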

"Possible recent" is probably too noisy and not sure if useful -- it should
point to a closest in time (to the latest commits) diff where a
significant excursion from current performance was detected.  So per se it has
nothing to do with the initial detected performance hit, but in some cases
seems still to reasonably locate commits hitting on performance.
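
That "possible recent" marker is then, again only as my rough reconstruction,
a backwards scan with the same kind of test, stopping at the window nearest to
the present that already differs from current performance:

    from scipy.stats import f_oneway

    def possible_recent(timings, window=10, alpha=0.05):
        # walk backwards from the newest commits; return the start index of the
        # closest-in-time window differing significantly from the last `window`
        recent = timings[-window:]
        for start in range(len(timings) - 2 * window, -1, -1):
            F, p = f_oneway(timings[start:start + window], recent)
            if p < alpha:
                return start   # nearest excursion, not necessarily the original hit
        return None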

Enjoy,

On Tue, 09 Jul 2013, Yaroslav Halchenko wrote:

> Julian Taylor contributed some benchmarks he was "concerned" about, so
> now the collection is even better.

> I will keep updating tests on the same url:
> http://www.onerussian.com/tmp/numpy-vbench/
> [it is still running; later I will upload results with more commits, for
> higher temporal fidelity]

> of particular interest for you might be:
> some minor but consistent recent losses in
> http://www.onerussian.com/tmp/numpy-vbench/vb_vb_io.html#strided-assign-float64
> http://www.onerussian.com/tmp/numpy-vbench/vb_vb_io.html#strided-assign-float32
> http://www.onerussian.com/tmp/numpy-vbench/vb_vb_io.html#strided-assign-int16
> http://www.onerussian.com/tmp/numpy-vbench/vb_vb_io.html#strided-assign-int8

> this one seems to have lost more than 25% of its performance over the
> timeline:
> http://www.onerussian.com/tmp/numpy-vbench/vb_vb_io.html#memcpy-int8 
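
> (if you want to poke at these locally, the timed operations boil down to
> roughly the following -- sizes and repeat counts are my guesses, not the
> exact vbench parameters:)
>
>     import numpy as np
>     import timeit
>
>     n = 1000000
>     for dtype in ('float64', 'float32', 'int16', 'int8'):
>         a = np.zeros(2 * n, dtype=dtype)
>         b = np.ones(n, dtype=dtype)
>
>         def strided_assign():
>             a[::2] = b         # write every other element of the larger array
>
>         print('strided assign %-8s %.4f s'
>               % (dtype, timeit.timeit(strided_assign, number=100)))
>
>     # contiguous int8 copy -- essentially a memcpy
>     src = np.ones(n, dtype='int8')
>     dst = np.empty_like(src)
>
>     def contiguous_copy():
>         dst[:] = src
>
>     print('memcpy int8             %.4f s'
>           % timeit.timeit(contiguous_copy, number=100))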

> "fast" calls to all/any seemed to be hurt twice in their life time now running
> *3 times slower* than in 2011 -- inflection points correspond to regressions
> and/or their fixes in those functions to bring back performance on "slow"
> cases (when array traversal is needed, e.g. on arrays of zeros for any)

> http://www.onerussian.com/tmp/numpy-vbench/vb_vb_reduce.html#numpy-all-fast
> http://www.onerussian.com/tmp/numpy-vbench/vb_vb_reduce.html#numpy-any-fast
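
> (by "fast" I mean the early-exit cases -- roughly the difference between the
> two calls below; the array size is only an illustration, not the one used in
> the benchmarks:)
>
>     import numpy as np
>     import timeit
>
>     a = np.zeros(1000000, dtype=bool)
>     a[0] = True
>     # "fast": np.any can bail out at the very first element
>     print('any fast: %.5f s' % timeit.timeit(lambda: np.any(a), number=1000))
>
>     z = np.zeros(1000000, dtype=bool)
>     # "slow": the whole array must be traversed before returning False
>     print('any slow: %.5f s' % timeit.timeit(lambda: np.any(z), number=1000))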

> Enjoy

> On Mon, 01 Jul 2013, Yaroslav Halchenko wrote:

> > FWIW -- updated plots with contribution from Julian Taylor
> > http://www.onerussian.com/tmp/numpy-vbench-20130701/vb_vb_indexing.html#mmap-slicing
> > ;-)

> > On Mon, 01 Jul 2013, Yaroslav Halchenko wrote:

> > > Hi Guys,

> > > Not quite the recommendations you expressed, but here is my ugly
> > > attempt to improve benchmark coverage:

> > > http://www.onerussian.com/tmp/numpy-vbench-20130701/index.html

> > > Initially I also ran those ufunc benchmarks for each dtype separately,
> > > but the resulting webpage gets so long that Firefox brings my laptop to
> > > its knees.  So I commented those out for now and left only the "summary"
> > > ones that run across multiple dtypes.
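
> > > (to give an idea: a per-dtype variant defines one benchmark per dtype,
> > > while a "summary" one times a single snippet looping over several dtypes
> > > at once -- something along these lines, with dtypes and sizes picked
> > > arbitrarily:)
> > >
> > >     import numpy as np
> > >
> > >     dtypes = ('int16', 'int32', 'float32', 'float64')
> > >     arrays = dict((dt, np.ones(10000, dtype=dt)) for dt in dtypes)
> > >
> > >     def add_summary():
> > >         # one timed snippet covering several dtypes, instead of one page
> > >         # section (and one plot) per dtype
> > >         for dt in dtypes:
> > >             np.add(arrays[dt], arrays[dt])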

> > > There is a bug in Sphinx which prevents embedding some of the figures
> > > for vb_random "as is", so pardon that for now...

> > > I have not set the CPU affinity of the process (though I ran it at
> > > nice -10), so maybe that also contributed to the variance of the
> > > benchmark estimates.  There are probably more goodies to borrow
> > > (e.g. gc control) to minimize variance from
> > > https://github.com/pydata/pandas/blob/master/vb_suite/test_perf.py,
> > > which I have just discovered.
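
> > > (the kind of tweaks I mean would look roughly like the snippet below;
> > > sched_setaffinity is Linux-only and needs Python >= 3.3, otherwise
> > > taskset from the shell does the same job:)
> > >
> > >     import gc
> > >     import os
> > >     import timeit
> > >     import numpy as np
> > >
> > >     os.sched_setaffinity(0, {0})  # pin this process to CPU 0 (Linux only)
> > >     gc.disable()                  # keep garbage collection out of the timings
> > >     try:
> > >         a = np.ones(1000000)
> > >         best = min(timeit.repeat(lambda: a * 2, number=100, repeat=5))
> > >         print('best of 5: %.5f s' % best)
> > >     finally:
> > >         gc.enable()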

> > > Nothing really interesting has been pinpointed so far, besides that:

> > > - svd became a bit faster a few months back ;-)

> > > http://www.onerussian.com/tmp/numpy-vbench-20130701/vb_vb_linalg.html

> > > - isnan (and isinf, isfinite) got improved

> > > http://www.onerussian.com/tmp/numpy-vbench-20130701/vb_vb_ufunc.html#numpy-isnan-a-10types

> > > - right_shift got a minuscule slowdown compared to what it used to be?

> > > http://www.onerussian.com/tmp/numpy-vbench-20130701/vb_vb_ufunc.html#numpy-right-shift-a-a-3types

> > > As before -- the current code of the benchmark collection is available
> > > at http://github.com/yarikoptic/numpy-vbench/pull/new/master

> > > If you have specific snippets you would like to benchmark -- just state
> > > them here or send a PR and I will add them in.
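
> > > (a PR just needs a vbench-style definition; if I recall the API correctly
> > > it is something like the following, with the statement, setup and name
> > > here being only an example:)
> > >
> > >     from vbench.benchmark import Benchmark
> > >
> > >     setup = "import numpy as np; a = np.ones(10000, dtype=np.float64)"
> > >     # the timed statement plus its setup; the name becomes the page anchor
> > >     add_float64 = Benchmark('np.add(a, a)', setup, name='add_float64_10000')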

> > > Cheers,

> > > On Tue, 07 May 2013, Daπid wrote:

> > > > On 7 May 2013 13:47, Sebastian Berg <sebast...@sipsolutions.net> wrote:
> > > > > Indexing/assignment was the first thing I thought of too (also because
> > > > > fancy indexing/assignment really could use some speedups...). Other
> > > > > than that, maybe some timings for small arrays/scalar math, but that
> > > > > might be nice for that GSoC project.

> > > > Why not go bigger? Ufunc operations on big arrays, CPU and memory
> > > > bound.

> > > > Also, what about interfacing with other packages? It may increase the
> > > > compilation overhead, but I would like to see Cython in action (say,
> > > > only the last version; maybe it can be fixed).
-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Senior Research Associate,     Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik        
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
