Hello everyone,

I am Arink, a computer science student and open source enthusiast. This
year I am interested in working on the project "Performance parity between
numpy arrays and Python scalars" [1].

I have tried to adapt Raul's work to numpy 1.7 [2] (it was originally done
for numpy 1.6 [3]).
So far, by avoiding
a) the unnecessary checking for floating point errors, which is slow, and
b) the unnecessary creation/destruction of scalar array types,

I am getting a speedup of about 1.05x, which is of course marginal. The
project description mentions that the ufunc lookup code is slow and
inefficient.
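
For context, this is roughly the micro-benchmark I am using to measure the
gap between Python floats and numpy scalars (a minimal sketch; the exact
timings will of course vary by machine):

import timeit

# Pure Python float addition vs. the same operation on numpy float64
# scalars, which has to go through the ufunc machinery.
py_time = timeit.timeit("a + b", setup="a = 1.0; b = 2.0", number=1_000_000)
np_time = timeit.timeit(
    "a + b",
    setup="import numpy as np; a = np.float64(1.0); b = np.float64(2.0)",
    number=1_000_000,
)

print("Python float add:  %.3f s" % py_time)
print("numpy float64 add: %.3f s" % np_time)
print("ratio: %.1fx" % (np_time / py_time))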

A few questions:
1. Does it have to check every single possible data type until it finds the
best match for the data that the operation is being performed on, or is
there a better way to find the best possible match?
2. If so, where are the bottlenecks? Are the checks for proper data types
very expensive?
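
To make question 1 concrete, my current mental model of the lookup is a
linear scan over the registered loop signatures, roughly like the sketch
below (find_matching_loop is just my own illustration, an assumption about
the behaviour, not the actual C code in numpy):

import numpy as np

def find_matching_loop(ufunc, operand_dtypes):
    # Assumption: walk the registered loop signatures in registration
    # order and take the first one every operand can be safely cast to.
    nin = ufunc.nin
    for sig in ufunc.types:              # e.g. 'dd->d' for double add
        loop_dtypes = [np.dtype(c) for c in sig[:nin]]
        if all(np.can_cast(op, lt)
               for op, lt in zip(operand_dtypes, loop_dtypes)):
            return sig
    raise TypeError("no matching loop found")

# Two float64 scalars scan past all the boolean/integer loops before
# hitting 'dd->d'.
print(find_matching_loop(np.add, [np.dtype(np.float64)] * 2))

Is this picture basically right, and if so, is the scan itself the expensive
part or the per-candidate casting checks?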



[1] http://projects.scipy.org/scipy/wiki/SummerofCodeIdeas
[2] https://github.com/arinkverma/numpy/compare/master...gsoc_performance
[3] http://article.gmane.org/gmane.comp.python.numeric.general/52480

-- 
Arink
Computer Science and Engineering
Indian Institute of Technology Ropar
www.arinkverma.in
