On 12 Nov, 18:33, J Kenneth King <ja...@agentultra.com> wrote:

> Where Python might get hit *as a language* is that the Python programmer
> has to drop into C to implement optimized data-structures for dealing
> with the kind of IO that would slow down the Python interpreter.  That's
> why we have numpy, scipy, etc.

That's not a Python-specific issue. We drop to SciPy/NumPy for certain
compute-bound tasks that operate on vectors. If that does not help,
we drop further down to Cython, C or Fortran. If that still does not
help, we can use assembly. In fact, if we use SciPy linked against
GotoBLAS, much of the compute-intensive linear algebra work is
delegated to hand-optimized assembly.
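
To make the point concrete, here is a minimal sketch of what "dropping
to NumPy" looks like. The exact BLAS the call ends up in (GotoBLAS,
OpenBLAS, MKL, ...) depends on how your NumPy/SciPy was built; the
pure-Python function is just a placeholder for a compute-bound loop:

    # Same dot product, written two ways.
    import numpy as np

    def dot_python(a, b):
        # compute-bound loop executed by the interpreter itself
        return sum(x * y for x, y in zip(a, b))

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    slow = dot_python(a, b)   # interpreter does the arithmetic
    fast = np.dot(a, b)       # delegated to the BLAS NumPy was linked against

We never left Python, yet the inner loop of np.dot runs in whatever
hand-optimized code the underlying BLAS provides.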

With Python we can stop at the level of abstraction that gives
acceptable performance. With C, we start out at a much lower level.
The principle that premature optimization is the root of all evil
applies here: Python code that is fast enough is fast enough. It does
not matter that hand-tuned assembly would be 1000 times faster. We can
direct our optimization effort to the parts of the code that need it.
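
A quick sketch of how that usually works in practice: profile first,
then rewrite only the hot spots. cProfile is in the standard library;
the function names below are hypothetical placeholders:

    import cProfile

    def hot_loop(n):
        # stand-in for the compute-bound part worth optimizing
        return sum(i * i for i in range(n))

    def cheap_setup():
        # stand-in for code that is already fast enough
        return list(range(10))

    def main():
        cheap_setup()
        hot_loop(10_000_000)

    cProfile.run("main()", sort="cumulative")

The profile tells you which few functions dominate the runtime; those
are the candidates for NumPy, Cython or C, and the rest stays as plain
Python.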
