On 01.03.2023 15:15, René Jansen wrote:
Well yes - that is really apples and oranges, and thanks for proving my point.

NumPy leverages hand-tuned assembly (BLAS) with hinting for different chip
levels and architectures, and the difference from plain Python is shown
here: https://en.algorithmica.org/hpc/complexity/languages/ You can of
course integrate OpenBLAS into everything.
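As a rough illustration of the gap being discussed, here is a minimal micro-benchmark sketch (assuming NumPy is installed; the array size and variable names are purely illustrative, not from the cited page):

```python
# Sketch: pure-Python dot product vs NumPy's BLAS-backed one.
import time

import numpy as np

n = 1_000_000
a = [1.0] * n
b = [2.0] * n

# Pure Python: the loop runs in the interpreter, one element at a time.
t0 = time.perf_counter()
py_dot = sum(x * y for x, y in zip(a, b))
py_time = time.perf_counter() - t0

# NumPy: the same reduction, but the inner loop runs in optimized
# native (BLAS-backed) code.
xa = np.ones(n)
xb = np.full(n, 2.0)
t0 = time.perf_counter()
np_dot = float(xa @ xb)
np_time = time.perf_counter() - t0

print(f"pure Python: {py_time:.4f}s  NumPy: {np_time:.4f}s")
```

On typical hardware the NumPy version is one to two orders of magnitude faster for the same result, which is the kind of multiplier people often mistake for a property of "the language" rather than of who controls the inner loop.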

... cut ...

The chapter at the above URL ends with an interesting Takeaway, worth citing in the context of this thread:


         Takeaway
         <https://en.algorithmica.org/hpc/complexity/languages/#takeaway>

   The key lesson here is that using a native, low-level language doesn't
   necessarily give you performance; but it does give you /control/ over
   performance.

   Complementary to the "N operations per second" simplification, many
   programmers also have a misconception that using different programming
   languages has some sort of multiplier on that number. Thinking this way
   and comparing languages
   <https://benchmarksgame-team.pages.debian.net/benchmarksgame/index.html>
   in terms of performance doesn't make much sense: programming languages
   are fundamentally just tools that take away /some/ control over
   performance in exchange for convenient abstractions. Regardless of the
   execution environment, it is still largely a programmer's job to use
   the opportunities that the hardware provides.

---rony

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
