Re: Memory RAM usage

2009-12-31 Thread grauzone

bearophile wrote:
(I have recently seen a 13X speedup in a non-synthetic program just by modifying how memory is used and reducing memory usage, while keeping the same algorithm. I can show you a URL if you want). 


Yes, please do!


Do you know how Free Pascal can use so little RAM? In this nbody benchmark 
(a simple N-body gravitational simulation) it seems to use less than half of 
the memory used by C, yet the C code is tight and clean enough, and both use 
64-bit floating point numbers:
http://shootout.alioth.debian.org/u32q/benchmark.php?test=nbody&lang=all&sort=kb
In other benchmarks Free Pascal's memory usage is not dramatically lower, but 
it's usually among the lowest in all the Shootout benchmarks.


No idea. I just know that FPC doesn't use GCC. I think it doesn't even 
link to libc! (I can't really confirm; ldd crashes on the executable 
produced by fpc.) Maybe it just avoids some constant overhead that way.



Bye,
bearophile


Re: What wrong did i do? (key in hashtable is always null)

2009-12-31 Thread grauzone

The Anh Tran wrote:
This is just a small D exercise. I ported the C++ knucleotide benchmark from 
shootout.alioth.debian.org.



Issue 1:
If I manually list the hashtable contents, the key does exist in that table.
But (key in hash_table) always yields null.
Worse, if I use "auto val = ht[key]", an exception is thrown.

Problem code is from line 163 to 177.


Issue 2:
If I pass an AA (uint[ulong]) to a template function,
DMD complains that uint[ulong] is void.
How can I get the type of an AA?

DMD 2.037. Linux Ubuntu.
Source code:
ftp://ftp.4utours.com/dualamd/Dlang/knu5.d
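
For issue 2, implicit function template instantiation can deduce both types 
of an AA parameter. A minimal sketch (the helper name lookup is made up) 
that compiles with current DMD:

// Deduce the key type K and value type V from the AA argument.
V lookup(K, V)(V[K] aa, K key)
{
    return aa[key];
}

void main()
{
    uint[ulong] counts = [1UL: 10u];
    assert(lookup(counts, 1UL) == 10u);  // K = ulong, V = uint
}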


Is your opCmp/toHash really called? Maybe the function signature is off, 
and dmd doesn't "find" the function. Just a guess, I don't really know 
how this D2 stuff works.
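
For comparison, a minimal sketch of a struct AA key whose methods do get 
picked up. The signatures are the ones current DMD expects; older D2 
releases wanted slightly different ones, which is exactly the kind of 
mismatch meant above:

import std.stdio;

struct Key
{
    ulong packed;

    // If any of these signatures is off (wrong constness, wrong parameter
    // type), the compiler silently falls back to the default TypeInfo
    // behavior and lookups misbehave as described above.
    size_t toHash() const pure nothrow @safe
    {
        return cast(size_t) packed;
    }

    bool opEquals(ref const Key other) const
    {
        return packed == other.packed;
    }

    int opCmp(ref const Key other) const
    {
        return packed < other.packed ? -1 : packed > other.packed ? 1 : 0;
    }
}

void main()
{
    uint[Key] ht;
    ht[Key(42)] = 1;
    assert(Key(42) in ht);   // works once the signatures match
    writeln(ht[Key(42)]);
}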



Sample data:
ftp://ftp.4utours.com/dualamd/Dlang/fa50k.txt

Thanks.


Re: What wrong did i do? (key in hashtable is always null)

2009-12-31 Thread The Anh Tran

grauzone wrote:
Is your opCmp/toHash really called? Maybe the function signature is off, 
and dmd doesn't "find" the function. Just a guess, I don't really know 
how this D2 stuff works.


toHash + opCmp are called.
The awkward part is that most of those functions are copied and pasted from 
C, where they work perfectly.

I suspect that it is a bug. I would like to know if someone else has met 
the same problem.


Re: What wrong did i do? (key in hashtable is always null)

2009-12-31 Thread The Anh Tran

bearophile wrote:

This was my version, maybe it solves some of your problems:
http://shootout.alioth.debian.org/debian/benchmark.php?test=knucleotide&lang=gdc&id=2

I haven't used my dlibs here, so for example that sort in the middle is long and ugly 
(and not fully correct: that opCmp doesn't compare both key and value, as the 
problem spec states, "sorted by descending frequency and then ascending k-nucleotide 
key"). In Python it becomes:
l = sorted(frequences.items(), reverse=True, key=lambda (seq,freq): (freq,seq))
With my dlibs it's similar. You can probably do something similar with Phobos2.
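
Roughly the same two-key sort in D looks like the sketch below; note that 
byKeyValue and this lambda syntax come from today's Phobos, well after the 
compilers discussed in this thread, so take it as illustrative only:

import std.algorithm : sort;
import std.array : array;
import std.stdio : writeln;

void main()
{
    // made-up sample frequencies
    uint[string] freqs = ["GGT": 3, "AAA": 3, "GGTA": 2];

    // sort by descending frequency, then ascending k-nucleotide key
    auto pairs = freqs.byKeyValue.array;
    pairs.sort!((a, b) =>
        a.value != b.value ? a.value > b.value : a.key < b.key);

    foreach (p; pairs)
        writeln(p.key, " ", p.value);
}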

By the way, the formatting of your code needs improvement: reduce the 
indentation width and format the code in a more readable way.

Bye,
bearophile


Thanks for pointing out the code formatting style. :)
The Shootout site stopped benchmarking D => why waste time formatting code 
for someone else?

I'm just curious about D's AA performance compared to C++'s 
pb_ds::cc_hash_table. The newest C++ knucleotide uses uint64 as the key, 
not char[] anymore.

In my small test case, D's built-in AA has the same performance as C's 
glib. That's 4-5 times slower than pb_ds::cc_hash_table. Moreover, I think 
it has a bug -.-
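
A minimal sketch of the kind of micro-benchmark meant here, with made-up 
sizes and today's std.datetime.stopwatch (which postdates this thread):

import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writeln;

void main()
{
    enum N = 1_000_000;
    uint[ulong] counts;

    auto sw = StopWatch(AutoStart.yes);
    foreach (ulong i; 0 .. N)
        counts[i * 2_654_435_761UL]++;  // spread the keys a bit
    sw.stop();

    writeln(counts.length, " keys inserted in ",
            sw.peek.total!"msecs", " ms");
}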


Re: Memory RAM usage

2009-12-31 Thread bearophile
grauzone:
> Yes, please do!

http://leonardo-m.livejournal.com/91798.html

Bye,
bearophile


Re: Memory RAM usage

2009-12-31 Thread BCS

Hello grauzone,


bearophile wrote:


In other benchmarks Free Pascal's memory usage is not dramatically
lower, but it's usually among the lowest in all the Shootout benchmarks.


No idea. I just know that FPC doesn't use GCC. I think it doesn't even
link to libc! (I can't really confirm; ldd crashes on the executable
produced by fpc.) Maybe it just avoids some constant overhead that way.



If that's the root cause, then it may well be a fictitious benefit, because it 
may have no effect on the resident set size (the amount of RAM actually used) 
during the main processing. All it would be doing is not including code that 
the C version never pulls off disk anyway (executables lazy-load on most 
systems).
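
One way to check: on Linux the resident set size can be read straight out of 
/proc while the program runs. A minimal sketch:

// Linux-only: print this process's resident set size (VmRSS).
import std.algorithm.searching : startsWith;
import std.stdio : File, writeln;

void main()
{
    foreach (line; File("/proc/self/status").byLine)
        if (line.startsWith("VmRSS"))
            writeln(line);   // e.g. "VmRSS:  1234 kB"
}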





Re: Is there a reason for default-int?

2009-12-31 Thread BCS

Hello Ary,


Don wrote:


BCS wrote:


Hello Ary,


Don wrote:


Phil Deets wrote:


On Mon, 28 Dec 2009 16:18:46 -0500, Simen kjaeraas wrote:


Apart from C legacy, is there a reason to assume anything we don't know 
what is, is an int? Shouldn't the compiler instead say 'unknown type' or 
something else that makes sense?

C++ abandoned default int. I think it would be best for D to do
the same.


D never had default int. When there's an error, the compiler just
has to choose *some* type, so that it doesn't crash.


It could be an Error type (that's not an alias for the int type) that
doesn't start to spit errors everywhere and instead just blocks all
further errors on that type.


that poses an interesting question: what "type" does this give?

int i;
char[] s;
int foo(int);
char[] foo(int, char[]);
int[] foo(char[], int);
auto whatType = foo(i ~ s, s);

i ~ s gives the error type, but DMD could tell that, as long as the
other args are correct, the only foo that works returns a char[]. So
does the variable get the error type or char[]?


Error.


Exactly! You would get an error saying "i ~ s" cannot happen, then it
resolves to the Error type. Now resolution of "foo" is not done (or
yes: it defaults to Error) because it has an argument of type Error
(one fewer error is shown in the console). Since foo is Error, whatType
is Error. Then if whatType is used it won't trigger errors. Etc.

You would get a single error at the precise position where you need to
correct it, instead of one error hidden among many unrelated errors.



IIRC, all of the above is planned and in the works. 

The only point I was wondering about is whether DMD should attempt to resolve 
functions where the args are of error types. My thought is that in some cases 
it could resolve them and detect real errors down the line. OTOH, the argument 
for this (that blind propagation of Error can disable type checking on large 
swaths of code, even across functions via template code and auto returns) is 
also an argument for why it could be a disaster if it makes the wrong guess.





Floating point differences at compile-time

2009-12-31 Thread bearophile
I don't understand where the result differences come from in this code:

import std.math: sqrt, PI;
import std.stdio: writefln;

void main() {
  // compile-time constant: 1.0 / x1 can be constant-folded by the compiler
  const double x1 = 3.0 * PI / 16.0;
  writefln("%.17f", sqrt(1.0 / x1));

  // run-time double variable: the division happens at run time
  double x2 = 3.0 * PI / 16.0;
  writefln("%.17f", sqrt(1.0 / x2));

  // run-time 80-bit real variable
  real x3 = 3.0 * PI / 16.0;
  writefln("%.17f", sqrt(1.0 / x3));

  // real-typed literals throughout
  real x4 = 3.0L * PI / 16.0L;
  writefln("%.17f", sqrt(1.0L / x4));
}

Output with various D compilers:

DMD1:
1.30294003174111994
1.30294003174111972
1.30294003174111979
1.30294003174111979

DMD2:
1.30294003174111972
1.30294003174111972
1.30294003174111979
1.30294003174111979

LDC:
1.30294003174111994
1.30294003174111994
1.30294003174111972
1.30294003174111972

I'd like the compiler(s) to give more deterministic results here.

Bye,
bearophile