On 29.03.15 at 05:06, Steven D'Aprano wrote:
I'm not one of those people who think that C is by definition the fastest
language conceivable. (People who believe this sometimes make an exception
for hand-crafted assembly, which is ironic since these days the best C
optimizing compilers can generate faster, tighter code than human assembly
programmers.) I know that program speed depends on the implementation of
the language, not necessarily the language itself. I know that Fortran can
beat C for numeric work, and that with a tiny fraction of the work put into
optimization that C has seen, modern languages like Scala, Eiffel, Haskell
and D can get to a factor of 2-6 times as slow as C. And I know that PyPy
has managed to beat C in some (quite restrictive, but not completely
unrealistic) benchmarks:

http://morepypy.blogspot.com.au/2011/02/pypy-faster-than-c-on-carefully-crafted.html
http://morepypy.blogspot.com.au/2011/08/pypy-is-faster-than-c-again-string.html


So beating C is not impossible.

Beating C is not impossible, but it is really hard. Compilation time is usually excluded from such benchmarks, so in principle any language could compile to the same instructions as the equivalent C program and hence achieve the same speed. The reason C is such a tough competitor is
        
1) the tools have been optimized for over 40 years - just compare the source code of gcc with a minimalistic compiler such as tcc, and then compare the execution speed of the programs each of them produces

2) the language is very easy to compile, since the programmer is forced to annotate the source code with data types that usually map 1:1 onto registers

If you can do type deduction for your variables that leads to the same result as the C program, then you can also compile to the same efficient code. In most cases the hand-crafted C program is actually the more restricted one: the Python programmer makes no promise to the compiler that things will never overflow, so arbitrary-precision integers must be supported, whereas C's int is restricted to 32 bits.
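To make concrete what that promise buys the compiler, here is a minimal sketch (the function name and the 32-bit width are only illustrative assumptions):

#include <stddef.h>
#include <stdint.h>

/* The declared types guarantee that every value fits in a machine word,
   so the accumulator can stay in a register and the loop is trivially
   vectorizable; no overflow check is ever emitted. */
int32_t sum_i32(const int32_t *a, size_t n)
{
        int32_t s = 0;
        for (size_t i = 0; i < n; i++)
                s += a[i];
        return s;
}

A Python compiler that cannot prove the values stay small has to fall back to arbitrary-precision integers instead, with a check and a possible heap allocation on every addition.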

Defeating a C compiler is possible in (rare) cases - I wouldn't really count the examples you've posted, since they don't actually compute anything useful. The Intel C compiler, for instance, is able to replace a loop like this:

/* naive triple loop: z += x * y for N x N matrices */
for (int i=0; i<N; i++) {
        for (int j=0; j<N; j++) {
                for (int k=0; k<N; k++) {
                        z[i][k]+=x[i][j]*y[j][k];
                }
        }
}

with a call to an optimized BLAS matrix multiplication in "some" (= impractical) cases. This can easily speed up the code by a factor of 100. It would be much easier for a compiled numpy-like language to do this for z=x*y and also treat slices etc. correctly. Still, practical compilers that do this and achieve *on average* faster programs than hand-crafted C do not exist.
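For reference, the replacement the compiler is effectively making looks roughly like this (a sketch assuming a CBLAS implementation such as OpenBLAS is linked in, and that x, y and z are row-major N x N double arrays):

#include <cblas.h>

/* z += x * y; beta = 1.0 keeps the existing contents of z,
   matching the += in the loop above. */
cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            N, N, N,
            1.0, &x[0][0], N,
                 &y[0][0], N,
            1.0, &z[0][0], N);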

Just look at the contrived PyPy benchmarks: for a very specifically tuned sample you can be almost twice as fast as C, while for a random algorithm like the Sudoku solver, not specifically tuned, C is 30x faster. It still boils down to the classic rules: static unboxed types and static or preallocated memory make your code fast. In many cases, though, programming in Python frees up programmer time, which in turn can lead to better algorithms that outperform the wisdom of the C compiler.
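To illustrate the "unboxed and preallocated" point, here is a minimal sketch (the type and function names are made up): the boxed variant chases a pointer per element, roughly what an interpreter does with objects, while the unboxed variant streams through one contiguous, preallocated buffer.

#include <stddef.h>

/* Boxed: every element is a separately heap-allocated object, as in a
   dynamic language; each access is a pointer chase into scattered memory. */
typedef struct { double value; } Box;

double sum_boxed(Box *const *items, size_t n)
{
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
                s += items[i]->value;
        return s;
}

/* Unboxed: one contiguous, preallocated array; the same loop is
   cache-friendly and easy for the compiler to vectorize. */
double sum_unboxed(const double *items, size_t n)
{
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
                s += items[i];
        return s;
}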


But, when you tell me that your very own personal interpreted language,
which I assume nobody else has worked on, is 40% faster than optimized C,
my first reaction is to expect that you've probably made a mistake

I agree with Chris that this was a misunderstanding.


        Christian
--
https://mail.python.org/mailman/listinfo/python-list
