Walter Bright:
for (int j=0;j<1e6-1;j++)
The j<1e6-1 is a floating point operation. It should be redone as an int one:
j<1_000_000-1
The syntax 1e6 can represent an integer value of one million as perfectly and
as precisely as 1_000_000, but traditionally in many languages the
kai wrote:
Here is a boiled down test case:
void main (string[] args)
{
double [] foo = new double [cast(int)1e6];
for (int i=0;i<1e3;i++)
{
for (int j=0;j<1e6-1;j++)
{
On 05/17/2010 01:15 AM, Walter Bright wrote:
bearophile wrote:
DMD compiler doesn't perform many optimizations,
This is simply false. DMD does an excellent job with integer and pointer
operations. It does a so-so job with floating point.
Interesting to note, relative to my earlier
bearophile wrote:
Walter Bright:
In my view, such switches are bad news, because:
The Intel compiler, Microsoft compiler, GCC and LLVM have a similar switch
(fp:fast in the Microsoft compiler, -ffast-math on GCC, etc). So you might
send your list of comments to the devs of each of those four
Walter Bright:
is not done because of roundoff error. Also,
0 * x = 0
is also not done because it is not a correct replacement if x is a NaN.
I have done a little experiment, compiling this D1 code with LDC:
import tango.stdc.stdio: printf;
void main(char[][] args) {
double x =
Walter Bright wrote:
Don wrote:
bearophile wrote:
kai:
Any ideas? Am I somehow not hitting a vital compiler optimization?
DMD compiler doesn't perform many optimizations, especially on
floating point computations.
More precisely:
In terms of optimizations performed, DMD isn't too far
On Fri, 14 May 2010 12:40:52 -0400, bearophile bearophileh...@lycos.com
wrote:
Steven Schveighoffer:
In C/C++, the default value for doubles is 0.
I think in C and C++ the default value for doubles is uninitialized
(that is anything).
You are probably right. All I did to figure this
Hello Don,
The most glaring limitation of the FP optimiser is that it seems to
never keep values in the FP stack. So that it will often do:
FSTP x
FLD x
instead of FST x
Fixing this would probably give a speedup of ~20% on almost all FP
code, and would unlock the path to further optimisation.
Walter Bright:
In my view, such switches are bad news, because:
The Intel compiler, Microsoft compiler, GCC and LLVM have a similar switch
(fp:fast in the Microsoft compiler, -ffast-math on GCC, etc). So you might send
your list of comments to the devs of each of those four compilers.
I have
strtr wrote:
== Quote from Don (nos...@nospam.com)'s article
strtr wrote:
== Quote from bearophile (bearophileh...@lycos.com)'s article
But the bigger problem in your code is that you are performing operations on
NaNs (that's the default initialization of FP values in D), and operations on
Jérôme M. Berger wrote:
div0 wrote:
Jérôme M. Berger wrote:
That depends. In C/C++, the default value for any global variable
is to have all bits set to 0 whatever that means for the actual data
type.
No it's not, it's always uninitialized.
div0 d...@users.sourceforge.net wrote:
Jérôme M. Berger wrote:
That depends. In C/C++, the default value for any global variable
is to have all bits set to 0 whatever that means for the actual data
type.
Ah, I only do C++, where the standard is to not initialise.
No, in C++ all *global or
div0 wrote:
Jérôme M. Berger wrote:
div0 wrote:
Jérôme M. Berger wrote:
That depends. In C/C++, the default value for any global variable
is to have all bits set to 0 whatever that means for the actual data
type.
No it's not, it's always uninitialized.
According to the C89
Don wrote:
bearophile wrote:
kai:
Any ideas? Am I somehow not hitting a vital compiler optimization?
DMD compiler doesn't perform many optimizations, especially on
floating point computations.
More precisely:
In terms of optimizations performed, DMD isn't too far behind gcc. But
it
bearophile wrote:
DMD compiler doesn't perform many optimizations,
This is simply false. DMD does an excellent job with integer and pointer
operations. It does a so-so job with floating point.
There are probably over a thousand optimizations at all levels that dmd does
with integer and
Walter Bright:
This is simply false. DMD does an excellent job with integer and pointer
operations. It does a so-so job with floating point.
There are probably over a thousand optimizations at all levels that dmd does
with integer and pointer code.
You are of course right, I understand your
On 5/16/2010 4:15 PM, Walter Bright wrote:
bearophile wrote:
DMD compiler doesn't perform many optimizations,
This is simply false. DMD does an excellent job with integer and pointer
operations. It does a so-so job with floating point.
There are probably over a thousand optimizations at
bearophile wrote:
kai:
Any ideas? Am I somehow not hitting a vital compiler optimization?
DMD compiler doesn't perform many optimizations, especially on floating point
computations.
More precisely:
In terms of optimizations performed, DMD isn't too far behind gcc. But
it performs almost
Jérôme M. Berger wrote:
That depends. In C/C++, the default value for any global variable
is to have all bits set to 0 whatever that means for the actual data
type.
No it's not, it's always uninitialized.
Visual Studio will initialise
div0 wrote:
Jérôme M. Berger wrote:
That depends. In C/C++, the default value for any global variable
is to have all bits set to 0 whatever that means for the actual data
type.
No it's not, it's always uninitialized.
According to the C89 standard and onwards it *must* be
strtr wrote:
== Quote from bearophile (bearophileh...@lycos.com)'s article
But the bigger problem in your code is that you are performing operations on
NaNs (that's the default initialization of FP values in D), and operations on
NaNs are usually quite slower.
I didn't know that. Is it the
== Quote from Don (nos...@nospam.com)'s article
strtr wrote:
== Quote from bearophile (bearophileh...@lycos.com)'s article
But the bigger problem in your code is that you are performing operations on
NaNs (that's the default initialization of FP values in D), and operations on NaNs
Steven Schveighoffer wrote:
double [] foo = new double [cast(int)1e6];
foo[] = 0;
I've discovered that this is the equivalent of the last line above:
foo = 0;
I don't see it in the spec. Is that an old or an unintended feature?
Ali
Ali Çehreli acehr...@yahoo.com wrote:
Steven Schveighoffer wrote:
double [] foo = new double [cast(int)1e6];
foo[] = 0;
I've discovered that this is the equivalent of the last line above:
foo = 0;
I don't see it in the spec. Is that an old or an unintended feature?
Looks
Simen kjaeraas wrote:
Ali Çehreli acehr...@yahoo.com wrote:
Steven Schveighoffer wrote:
double [] foo = new double [cast(int)1e6];
foo[] = 0;
I've discovered that this is the equivalent of the last line above:
foo = 0;
I don't see it in the spec. Is that an old or an
Ali Çehreli:
I don't see it in the spec. Is that an old or an unintended feature?
It's a compiler bug; don't use that bracketless syntax in your programs.
Don is fighting to fix such problems (and I have written several posts and bug
reports on that stuff).
Bye,
bearophile
On Fri, 14 May 2010 02:38:40 +, kai wrote:
Hello,
I was evaluating using D for some numerical stuff. However I was
surprised to find that looping array indexing was not very speedy
compared to alternatives (gcc et al). I was using the DMD2 compiler on
mac and windows, with -O
On Fri, 14 May 2010 06:31:29 +, Lars T. Kyllingstad wrote:
void main ()
{
double[] foo = new double [cast(int)1e6];
double[] slice1 = foo[0 .. 999_998];
double[] slice2 = foo[1 .. 999_999];
for (int i=0;i<1e3;i++)
{
// BAD,
kai:
I was evaluating using D for some numerical stuff.
For that evaluation you probably have to use the LDC compiler, that is able to
optimize better.
void main (string[] args)
{
double [] foo = new double [cast(int)1e6];
for (int i=0;i<1e3;i++)
On Fri, 14 May 2010 07:32:54 -0400, Steven Schveighoffer wrote:
On Fri, 14 May 2010 02:31:29 -0400, Lars T. Kyllingstad
pub...@kyllingen.nospamnet wrote:
On Fri, 14 May 2010 02:38:40 +, kai wrote:
I was using the DMD2 compiler on
mac and windows, with -O -release.
1. Have you
On Thu, 13 May 2010 22:38:40 -0400, kai k...@nospam.zzz wrote:
Hello,
I was evaluating using D for some numerical stuff. However I was
surprised to
find that looping array indexing was not very speedy compared to
alternatives (gcc et al). I was using the DMD2 compiler on mac and
windows,
Thanks for the help all!
2. Can you use vector operations? If the example you gave is
representative of your specific problem, then you can't because you are
adding overlapping parts of the array. But if you are doing operations
on separate arrays, then array operations will be *much*
== Quote from bearophile (bearophileh...@lycos.com)'s article
But the bigger problem in your code is that you are performing operations on
NaNs (that's the default initialization of FP values in D), and operations on
NaNs are usually quite slower.
I didn't know that. Is it the same for inf?
I
kai:
I was scared off by the warning that D 2.0 support is experimental.
LDC is D1 still, mostly :-(
And at the moment it uses LLVM 2.6.
LLVM 2.7 contains a new optimization that can improve that code some more.
Good to know, thanks (that's actually a great feature for scientists!).
In
bearophile wrote:
kai:
I was scared off by the warning that D 2.0 support is experimental.
LDC is D1 still, mostly :-(
And at the moment it uses LLVM 2.6.
LLVM 2.7 contains a new optimization that can improve that code some more.
Good to know, thanks (that's actually a great feature
35 matches