I made the following performance test, which adds 10^9 doubles,
on Linux with the latest DMD compiler (run from the Eclipse IDE)
and with the GDC compiler, also on Linux. Then I ran the same test
in C++ on Linux and in Scala on the JVM, also on Linux. All tests
were done on the same PC.
The results for one addition are:
D-DMD: 3.1 nanoseconds
D-GDC: 3.8 nanoseconds
C++: 1.0 nanoseconds
Scala: 1.0 nanoseconds
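(The per-addition time is simply the wall-clock time of the plus() call divided by the number of steps, so DMD needs roughly 3.1 s in total for the 10^9 additions, versus about 1 s for C++ and Scala.)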
D-Source:
import std.stdio;
import std.datetime;
import std.string;
import core.time;

void main() {
    run!(plus)( 1000*1000*1000 );
}

string plus( int steps ) {
    double sum = 1.346346;
    immutable double p0 = 0.0045;
    immutable double p1 = 1.00045452 - p0;
    auto b = true;
    for( int i = 0; i < steps; i++ ) {
        // alternate between the two increments
        switch( b ) {
            case true:
                sum += p0;
                break;
            default:
                sum += p1;
                break;
        }
        b = !b;
    }
    return format("%s %f", "plus\nLast: ", sum);
}

void run( alias func )( int steps )
if( is(typeof(func(steps)) == string) ) {
    auto begin = Clock.currStdTime();
    string output = func( steps );
    auto end = Clock.currStdTime();
    double nanotime = toNanos(end - begin) / steps;
    writeln( output );
    writeln( "Time per op: ", nanotime );
    writeln();
}

// Clock.currStdTime() is in hecto-nanosecond (100 ns) units
double toNanos( long hns ) { return hns * 100.0; }
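In case it is useful, here is a minimal standalone variant of the hot loop that could be compared against: it replaces the switch with a ternary and times with StopWatch instead of Clock.currStdTime. This is only a sketch; it assumes a Phobos version that provides std.datetime.stopwatch, and plusTernary is just a name I made up for it.

import std.datetime.stopwatch : StopWatch;
import std.stdio : writefln;

// Same work as plus(), but with a ternary instead of the switch,
// to check whether the branch construct affects the measured time.
double plusTernary( int steps ) {
    double sum = 1.346346;
    immutable double p0 = 0.0045;
    immutable double p1 = 1.00045452 - p0;
    bool b = true;
    foreach( i; 0 .. steps ) {
        sum += b ? p0 : p1;
        b = !b;
    }
    return sum;
}

void main() {
    enum steps = 1000*1000*1000;
    StopWatch sw;
    sw.start();
    immutable double sum = plusTernary( steps );
    sw.stop();
    // peek() returns a Duration; total!"nsecs" converts it to nanoseconds
    double nanotime = (cast(double) sw.peek.total!"nsecs") / steps;
    writefln( "Last: %f\nTime per op: %f", sum, nanotime );
}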
Compiler settings for D:
dmd -c
-of.dub/build/application-release-nobounds-linux.posix-x86-dmd-DF74188E055ED2E8ADD9C152107A632F/first.o
-release -inline -noboundscheck -O -w -version=Have_first
-Isource source/perf/testperf.d
gdc ./source/perf/testperf.d -frelease -o testperf
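In case someone wants to reproduce this without dub, I think this standalone dmd invocation is roughly equivalent (I have not verified that it matches exactly what dub passes):
dmd -O -release -inline -noboundscheck -oftestperf source/perf/testperf.d
For gdc I only used -frelease as shown above; I have not checked whether an explicit optimization level such as -O2 changes the numbers.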
So what is the problem? Are the compiler switches wrong? Or is
D really that slow with these compilers? Can you help me?
Thomas