Re: What wrong did i do? (key in hashtable is always null)

2009-12-31 Thread The Anh Tran

grauzone wrote:
Is your opCmp/toHash really called? Maybe the function signature is off, 
and dmd doesn't find the function. Just a guess, I don't really know 
how this D2 stuff works.


toHash and opCmp are called.
The awkward part is that most of those functions were copied and pasted from C, 
where they work perfectly.


I suspect that it is a bug. I would like to know if someone else has met 
the same problem.
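For reference, here is a minimal sketch of a struct key whose toHash/opEquals/opCmp the compiler should actually pick up. The exact const/attribute requirements varied between D2 compiler releases (dmd 2.037 used plainer signatures than later compilers), so treat these signatures as an assumption to check against your dmd version -- if a signature is off, dmd silently falls back to the default bitwise hash/compare instead of calling your functions:

```d
// Hypothetical 64-bit key wrapper for a built-in AA.
struct Key
{
    ulong value;

    // Must match the signature druntime expects, or it is ignored.
    size_t toHash() const nothrow @safe
    {
        return cast(size_t) (value ^ (value >> 32));
    }

    bool opEquals(ref const Key other) const
    {
        return value == other.value;
    }

    int opCmp(ref const Key other) const
    {
        return value < other.value ? -1 : (value > other.value ? 1 : 0);
    }
}

void main()
{
    uint[Key] counts;
    counts[Key(0xDEADBEEF)] = 1;
    assert(Key(0xDEADBEEF) in counts);  // lookup finds the key
}
```

If the assert fails but a manual foreach over the AA shows the key, that is a strong hint the custom toHash/opEquals is not being called.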


Re: What wrong did i do? (key in hashtable is always null)

2009-12-31 Thread The Anh Tran

bearophile wrote:

This was my version, maybe it solves some of your problems:
http://shootout.alioth.debian.org/debian/benchmark.php?test=knucleotide&lang=gdc&id=2

I haven't used my dlibs here, so for example that sort in the middle is long and ugly 
(and not fully correct: that opCmp doesn't compare both key and value as the 
problem specs state, sorted by descending frequency and then ascending k-nucleotide 
key). In Python it becomes:
l = sorted(frequences.items(), reverse=True, key=lambda (seq,freq): (freq,seq))
With my dlibs it's similar. You can probably do something similar with Phobos2.
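For what it's worth, with a later Phobos2 the sort can indeed be written close to the Python one-liner. This is a sketch assuming std.algorithm.sort and std.typecons tuples; the lambda syntax shown needs a dmd newer than the ones discussed in this thread:

```d
import std.algorithm : sort;
import std.typecons : Tuple, tuple;

void main()
{
    // Toy frequency table standing in for the k-nucleotide counts.
    uint[string] freq = ["ggt": 3, "aat": 3, "gga": 1];

    Tuple!(string, uint)[] pairs;
    foreach (seq, count; freq)
        pairs ~= tuple(seq, count);

    // Descending frequency, then ascending k-nucleotide key,
    // as the problem spec requires.
    sort!((a, b) => a[1] != b[1] ? a[1] > b[1] : a[0] < b[0])(pairs);

    assert(pairs[0][0] == "aat");  // ties on freq break on the key
}
```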

By the way, the formatting of your code needs improvements, reduce indentation 
length and format code in a more readable way.

Bye,
bearophile


Thanks for pointing out the code formatting style. :)
The Shootout site stopped benchmarking the D language, so why waste time 
formatting code for someone else?


I'm just curious about D's AA performance compared to C++'s 
pb_ds::cc_hash_table. The newest C++ knucleotide uses uint64 as the key, 
not char[] anymore.


In my small test case, D's built-in AA has the same performance as C's GLib. 
That's 4-5 times slower than pb_ds::cc_hash_table. Moreover, I think 
it has a bug -.-


What wrong did i do? (key in hashtable is always null)

2009-12-30 Thread The Anh Tran
This is just a small D exercise. I ported the C++ knucleotide benchmark from 
shootout.alioth.debian.org



Issue 1:
If I manually list the hashtable contents, the key does exist in the table.
But (key in hash_table) always yields null.
Worse, if I use auto val = ht[key];, an exception is thrown.

Problem code is from line 163 to 177.


Issue 2:
If I pass an AA (uint[ulong]) to a template function,
DMD complains that uint[ulong] is void.
How can I get the key and value types of the AA?
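On issue 2, one pattern that works in later D2 compilers is to declare the parameter as V[K], so both types are deduced from the argument. A sketch, with a hypothetical helper name; on a compiler that mis-reports the AA type, explicit instantiation may be needed:

```d
// Hypothetical helper: K and V are deduced from the V[K] parameter.
V sumValues(K, V)(V[K] aa)
{
    V total = 0;
    foreach (v; aa)
        total += v;
    return total;
}

void main()
{
    uint[ulong] counts;
    counts[1UL] = 2;
    counts[2UL] = 3;
    assert(sumValues(counts) == 5);

    // Fallback if deduction fails on an older dmd:
    assert(sumValues!(ulong, uint)(counts) == 5);
}
```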

DMD 2.037. Linux Ubuntu.
Source code:
ftp://ftp.4utours.com/dualamd/Dlang/knu5.d
Sample data:
ftp://ftp.4utours.com/dualamd/Dlang/fa50k.txt

Thanks.


Re: lvalue - opIndexAssign - Tango

2009-03-14 Thread The Anh Tran

Daniel Keep wrote:


You could try the Tango IRC channel:
That, or the Tango forums:
http://dsource.org/projects/tango/forums

You can report problems with Tango via the ticket system:
http://dsource.org/projects/tango/report (New Ticket is down the
bottom of the page.)


I tried to register, but no confirmation mail was sent to me for 2 
days. That's why I post here :(


-

Linux Ubuntu 8.10.
DMD v1.041 from digitalmars.com
GDC 4.2, download from ubuntu synaptic.
Tango 0.99.7. Zip package from 
http://www.dsource.org/projects/tango/wiki/SourceDownloads


---

import tango.io.Stdout;
import tango.util.container.HashMap;
void main()
{
    auto hm = new HashMap!(uint, uint)();
}

dmd -w -g -debug test.d
Error:
warning -
../import/tango/util/container/Slink.d(352): Error: statement is not 
reachable
../import/tango/util/container/HashMap.d(13): template instance 
tango.util.container.HashMap.HashMap!(uint,uint) error instantiating




I fixed the GDC Tango regex problem myself. It was just a compile flag.



import tango.io.Stdout;
import tango.core.Atomic;

void main()
{
    uint x = 255;
    uint y = atomicIncrement(x);

    Atomic!(uint) at;
    at.store(255);
    uint c = at.increment();
}


gdc ./test.d -o ./test-tango-gdc -frelease -g -O3 -msse2 -march=native 
-mfpmath=sse -femit-templates=auto -fversion=Posix -fversion=Tango -lgtango


./hello.d:21: template 
tango.core.Atomic.Atomic!(uint).Atomic.store(msync ms = msync.seq) does 
not match any template declaration
./hello.d:21: template 
tango.core.Atomic.Atomic!(uint).Atomic.store(msync ms = msync.seq) 
cannot deduce template function from argument types (int)



dmd -g -release -O -inline ./test.d -of./test-tango-dmd
Here is disasm of atomicIncrement():
push    ebp
mov     ebp, esp
push    eax
mov     eax, [ebp+var_4]
lock inc byte ptr [eax]   ; --- wrong, must be dword ptr
mov     eax, [eax]
mov     esp, ebp
pop     ebp
retn
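The practical effect of that byte-sized increment is easy to demonstrate. A small sketch of the wrap-around the disassembly implies:

```d
void main()
{
    ubyte b = 255;
    b++;              // what "lock inc byte ptr" effectively does: wraps at 255
    assert(b == 0);

    uint w = 255;
    w++;              // what "lock inc dword ptr" would do: counts normally
    assert(w == 256);
}
```

So any counter driven through this atomicIncrement silently resets every 256 increments.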


lvalue - opIndexAssign - Tango

2009-03-13 Thread The Anh Tran

Hi,

When porting from C++ to D, I encountered this strange discrepancy:
1. Built-in AA:
int[int] arr;
arr[123] += 12345;
arr[321]++;

2. Tango HashMap:
auto hm = new HashMap!(int, int)();
hm[123] += 12345; // error: not an lvalue
hm[123]++;        // error

The D documentation says the current opIndexAssign does not work as an lvalue. 
But why can the built-in AA do that? How can I copy the built-in AA behaviour?
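The built-in AA gets special compiler support: indexing it yields a real lvalue, inserting a default-initialized element if the key is missing. A user container can mimic that by exposing a method that returns a pointer to the slot. This is only a sketch with hypothetical names, not Tango's API, and it uses a built-in AA as backing storage for brevity:

```d
// Hypothetical wrapper mimicking the built-in AA's lvalue indexing.
struct IntMap
{
    int[int] data;

    // Return a pointer to the value slot, inserting 0 if absent.
    int* slot(int key)
    {
        auto p = key in data;
        if (p is null)
        {
            data[key] = 0;
            p = key in data;
        }
        return p;
    }
}

void main()
{
    IntMap hm;
    *hm.slot(123) += 12345;   // works like arr[123] += 12345
    (*hm.slot(123))++;
    assert(hm.data[123] == 12346);
}
```

Later D2 compilers also added opIndexOpAssign and opIndexUnary overloads, which let a container support `hm[123] += x` directly without exposing pointers.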


--

Forgive my noob question: where is the right place to ask questions and report bugs for Tango?

1. I can't compile D code using tango hashmap in debug mode:

import tango.util.container.HashMap;
void main()
{
auto hm = new HashMap!(uint, uint)();
}

 dmd -w -g -debug  hello.d // error

2. Compiling D code that uses Tango Regex with GDC emits lots of link errors.

3. Bug in Tango atomicIncrement, atomicDecrement:

int task_done = 0;
atomicIncrement(task_done);

That function compiles to:
lock inc byte ptr [task_done];
which is wrong: the counter will wrap to 0 at 255. It should be:
lock inc dword ptr [task_done];

4. There is no atomicAdd(ref original, int newvalue) family. The GCC 
equivalent is __sync_fetch_and_add ...
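For the record, D2's druntime later grew exactly this family in core.atomic. A sketch assuming a druntime recent enough to have atomicOp and atomicLoad (not available in the Tango/D1 setup discussed here):

```d
import core.atomic;

shared uint counter = 255;

void main()
{
    // atomicOp is the fetch-and-add style primitive; it operates on the
    // full 32-bit word, so there is no wrap at 255.
    atomicOp!"+="(counter, 1);
    assert(atomicLoad(counter) == 256);
}
```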


CTFE - filling an array

2008-12-09 Thread The Anh Tran

Hi,
I would like to pre-compute a double array where each element is calculated by 
a function.


//-
import std.stdio;
import std.conv;
import std.string;

template f(int N)
{
    bool fn(double[] tb)
    {
        for (int i = 1; i < N; i++)
            tb[i] = 1.0 / i;
        return true;
    }
}

void main(string[] args)
{
    const int N = 200;
    static double[N] dd = void;
    static auto tmp = f!(N).fn(dd);

    int n = toInt(args[1]);

    writefln("test array[%d] = %f", n, dd[n]);
}
//--

DMD 2.021 says it cannot evaluate static auto tmp = f!(N).fn(dd); at 
compile time.


How can I fix it?
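One common fix (with a dmd newer than 2.021, whose CTFE could not yet allocate arrays) is to return the finished table from a function and use it as a static initializer, instead of filling a pre-declared array through a side effect -- CTFE can evaluate a pure computation to a value, but not mutate a runtime variable. A sketch:

```d
import std.stdio;

// Build the table in a function; used as a static initializer below,
// the call is forced through CTFE.
double[] makeTable(int n)
{
    auto tb = new double[n];
    tb[0] = 0.0;
    foreach (i; 1 .. n)
        tb[i] = 1.0 / i;
    return tb;
}

void main()
{
    // The CTFE result (length 200) initializes the fixed-size array.
    static immutable double[200] dd = makeTable(200);
    writefln("test array[%d] = %f", 5, dd[5]);
}
```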
Thanks.