Re: Ternary Search Trees

2009-04-24 Thread Robert Fraser

Thanks for reading through it!

bearophile wrote:

Robert Fraser:

Attached, if you don't mind NO comments.


This is your code; I can see you have not used the union trick I suggested to 
reduce the memory used by each node:

private static struct TreeNode
{
TreeNode* left;
TreeNode* right;
TreeNode* mid;
K split;
K[] key;
V value;
}


How does the union trick work exactly?
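(For reference, the union bearophile posts later in the thread overlaps the mid 
pointer with the terminal payload. Applied to the node above it would look 
roughly like this; the sketch assumes V fits in a pointer and that a node 
terminating a key never also needs a mid child, which, as noted below, doesn't 
hold for this particular design:)

private static struct TreeNode
{
    TreeNode* left;
    TreeNode* right;
    union
    {
        TreeNode* mid;   // interior node: middle child
        V value;         // terminal node: stored value (assumes V.sizeof <= (void*).sizeof)
    }
    K split;
    K[] key;
}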


And it seems all nodes contain a value, not just the leaves.


I'm assuming that V.sizeof == (void*).sizeof. There are actually 12 bytes 
that are unused in all but the nodes containing values, since the key is 
also cached there. This could be eliminated by keeping track of the 
current stack in the opApply methods, but that slows down iteration.
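(For illustration, a sketch of what that stack/prefix-tracking opApply might 
look like; the terminal-node test, the root field, and the delegate signature 
are assumptions:)

// Rebuild the key from an explicit prefix buffer during iteration instead of
// caching the full key in every node.
int opApply(int delegate(ref K[] key, ref V value) dg)
{
    K[] prefix;

    int visit(TreeNode* n)
    {
        if (n is null) return 0;
        if (auto r = visit(n.left)) return r;          // left siblings share the prefix
        prefix ~= n.split;                             // descending into mid extends the prefix
        if (n.key !is null)                            // hypothetical terminal-node test
            if (auto r = dg(prefix, n.value)) return r;
        if (auto r = visit(n.mid)) return r;
        prefix = prefix[0 .. $ - 1];                   // pop before visiting right siblings
        if (auto r = visit(n.right)) return r;
        return 0;
    }

    return visit(root);                                // root: the tree's root node (assumed)
}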


Non-leaf nodes can also have values. For example, if you have "var" = 5 
and "var2" = 10, "r" will have a value and a mid.



the compiler wasn't unrolling the tail call<


I think DMD still isn't able to unroll tail calls.
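(For illustration only, one way to sidestep that is to eliminate the tail call 
by hand, turning the recursive descent into a loop. A sketch against the 
TreeNode layout above; find and the terminal-node test are hypothetical:)

// Iterative TST lookup: the tail-recursive descent becomes a while loop, so no
// call frames accumulate even if the compiler doesn't optimize tail calls.
V* find(TreeNode* node, K[] key)
{
    size_t i = 0;
    while (node !is null)
    {
        if (key[i] < node.split)
            node = node.left;
        else if (key[i] > node.split)
            node = node.right;
        else if (++i == key.length)
            return node.key !is null ? &node.value : null;  // terminal-node test is an assumption
        else
            node = node.mid;
    }
    return null;
}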

Regarding the memory.d, in my implementation I have used a:
T[(MAX_BLOCK_BYTES / T.sizeof)] items;

you have used:
private void[BLOCK_SIZE] data = void;
memset(blk.data.ptr, 0, BLOCK_SIZE);

If you don't set that void array (of uint? ubyte?) to void, the compiler zeroes 
it with a memset anyway, so I don't understand what you have gained.


A plain void[SIZE] declaration fails with an obscure error message... you need 
to initialize it to void.
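(A minimal sketch of the two declarations in question, with a hypothetical 
BLOCK_SIZE constant:)

const size_t BLOCK_SIZE = 1 << 14;  // hypothetical block size

struct Block
{
    // void[BLOCK_SIZE] data;       // rejected with an obscure error, as described above
    void[BLOCK_SIZE] data = void;   // accepted; the memory is simply left uninitialized
}
// ...which is why the explicit memset() shown above is what actually zeroes the block.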



And allocating an array of nodes gives the GC a chance to know what's inside the 
block of memory, so it's more compatible with better future GCs (it may be 
better even now).
It also allows for safer initialization of the struct fields; for example, on 
some CPUs null can be != 0.


I agree, but I wanted a generic construct that could also be used for 
allocating class instances.



Later I'll try to adapt your code to Phobos1.


Thanks!


Re: Ternary Search Trees

2009-04-24 Thread bearophile
Robert Fraser:
> Attached, if you don't mind NO comments.

This is your code; I can see you have not used the union trick I suggested to 
reduce the memory used by each node:

private static struct TreeNode
{
TreeNode* left;
TreeNode* right;
TreeNode* mid;
K split;
K[] key;
V value;
}

And it seems all nodes contain a value, not just the leaves.

>the compiler wasn't unrolling the tail call<

I think DMD still isn't able to unroll tail calls.

Regarding the memory.d, in my implementation I have used a:
T[(MAX_BLOCK_BYTES / T.sizeof)] items;

you have used:
private void[BLOCK_SIZE] data = void;
memset(blk.data.ptr, 0, BLOCK_SIZE);

If you don't set that void array (of uint? ubyte?) to void, the compiler zeroes 
it with a memset anyway, so I don't understand what you have gained.

And allocating an array of nodes gives the GC a chance to know what's inside the 
block of memory, so it's more compatible with better future GCs (it may be 
better even now).
It also allows for safer initialization of the struct fields; for example, on 
some CPUs null can be != 0.

Later I'll try to adapt your code to Phobos1.

Bye,
bearophile


Re: Ternary Search Trees

2009-04-24 Thread Robert Fraser

bearophile wrote:

Robert Fraser:
I implemented a version of the TST you posted using Tango + D1... Here 
are my results:


Nice.
Is your TST a map? (not a set).


Yes, it's a map (for my use case).


Are you willing to show me your TST code? Your code is surely better than mine, 
and there may be a few little things that can be tuned.
And are you also willing to add it to Phobos? (I'll port it to Phobos1; it's 
probably easy to do.)


Attached, if you don't mind NO comments.

I really want to port this: 
http://dasnar.sdf-eu.org/res/ctst-README.html ... The automatic 
balancing might make it a lot better if elements are inserted in order. 
Porting might be very easy: just replace the malloc/free calls with 
gc_malloc/gc_free calls.
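(If that pans out, the substitution might amount to a pair of shims; a sketch 
assuming Tango's tango.core.Memory API:)

import tango.core.Memory : GC;

// Hypothetical replacements for the C library's allocation calls, routing node
// allocation through the D GC so freed nodes don't need manual bookkeeping.
void* tst_malloc(size_t n) { return GC.malloc(n); }
void  tst_free(void* p)    { GC.free(p); }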


The big winner here, though, appears to be 
D's built-in associative arrays. I thought they were supposed to be very 
slow, but Tango's implementation, at least, looks pretty good (without 
any re-hashing).


Try comparing the performance of the built-in D AAs with Psyco+Python dicts, or 
a good C++ STL hash set/hash map.
D AAs are probably also slow because they are statically compiled into Phobos 
rather than being a template.


I think Phobos1's AAs were pretty bad, but have you tried 
Tango's/druntime's? They're quite optimized (they're implemented as a 
hash table with trees used for collisions).
/***
 * Copyright (c) 2008 Robert Fraser
 * 
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License v1.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-v10.html
 
***/

module candy.util.array;

import tango.core.BitManip : bsr;
import tango.core.Traits : isReferenceType;
import tango.core.Memory : GC;
import tango.stdc.string : memcpy, memset;

public struct Array(T, size_t START_SIZE = 16, bool doScan = 
isReferenceType!(T),
alias Realloc = GC.realloc, alias Free = GC.free)
{
T* arr = null;
size_t len = 0;
size_t cap = 0;

public void opCatAssign(T v)
{
len++;
if(len >= cap)
grow(cap ? cap * 2 : START_SIZE);
arr[len - 1] = v;
}

public void opCatAssign(T[] v)
{
size_t newlen = len + v.length;
if(newlen > cap)
{
if(newlen < START_SIZE)
grow(START_SIZE);
else
grow(2 << bsr(newlen + 1)); // Next power of 2
}
memcpy(arr + len, v.ptr, (newlen - len) * T.sizeof);
len = newlen;
}

public void opCatAssign(Array!(T) v)
{
opCatAssign(v.toArray());
}

public T[] toArray()
{
return arr[0 .. len];
}

public size_t length()
{
return len;
}

public T opIndex(size_t n)
{
assert(n < len);
return arr[n];
}

public void opIndexAssign(T v, size_t n)
{
assert(n < len);
arr[n] = v;
}

public T[] opSlice()
{
return toArray();
}

public T[] opSlice(size_t low, size_t high)
{
assert(low <= high);
assert(high <= len);
return arr[low .. high];
}

public void opSliceAssign(T v)
{
arr[0 .. len] = v;
}

public void opSliceAssign(T v, size_t low, size_t high)
{
assert(low <= high);
assert(high <= len);
arr[low .. high] = v;
}

public void opSliceAssign(T[] v, size_t low, size_t high)
{
assert(low <= high);
assert(high <= len);
assert(v.length == high - low);
memcpy(arr + low, v.ptr, v.length * T.sizeof);
}

public void opSliceAssign(Array!(T) v, size_t low, size_t high)
{
opSliceAssign(v.toArray(), low, high);
}

public bool isEmpty()
{
return len == 0;
}

public void clear()
{
len = 0;
}

public void free()
{
if(arr)
{
Free(arr);
arr = null;
}
len = 0;
}

public void zero()
{
if(arr)
{
memset(arr, 0, cap * T.sizeof);
  

Re: Ternary Search Trees

2009-04-24 Thread bearophile
Robert Fraser:
> I implemented a version of the TST you posted using Tango + D1... Here 
> are my results:

Nice.
Is your TST a map? (not a set).
Are you willing to show me your TST code? Your code is surely better than mine, 
and there may be a few little things that can be tuned.
And are you also willing to add it to Phobos? (I'll port it to Phobos1; it's 
probably easy to do.)


> The big winner here, though, appears to be 
> D's built-in associative arrays. I thought they were supposed to be very 
> slow, but Tango's implementation, at least, looks pretty good (without 
> any re-hashing).

Try comparing the performance of the built-in D AAs with Psyco+Python dicts, or 
a good C++ STL hash set/hash map.
D AAs are probably also slow because they are statically compiled into Phobos 
rather than being a template.


>  However, the times here are very low (the word file is only 
> 1.1MB, and my system is fairly fast, though this is memory-dependent not 
> CPU-dependent)... I'll try with a larger word list & see what results I get.

With modern CPUs a lot of tests become too fast :-)

Bye,
bearophile


Re: Ternary Search Trees

2009-04-24 Thread Robert Fraser

Robert Fraser wrote:

bearophile wrote:
Does someone have a need for Ternary Search Trees in Phobos (for D1, and 
eventually later for D2 too)?
TSTs let you find keys, key prefixes, or even keys with holes. Keys are 
arrays of T, where T is the template type.
They can be designed to store the keys alone, or as an associative 
data structure.


With some benchmarks I have seen that a simple TST implementation is 
about as fast as the built-in AAs of D (but much slower than Python 
dicts).


Bye,
bearophile


I implemented a version of the TST you posted using Tango + D1... Here 
are my results:


Test        Part        Mean      Median    Max
----------  ----------  --------  --------  --------
TST         Insertion   0.0428    0.0338    0.0886
TST         Iteration   0.0022    0.0022    0.0024
TST         Lookup      0.0225    0.0223    0.0237
HashMap     Insertion   0.0621    0.0421    0.2205
HashMap     Iteration   0.0035    0.0034    0.0036
HashMap     Lookup      0.0169    0.0168    0.0184
TreeMap     Insertion   0.1045    0.1041    0.1058
TreeMap     Iteration   0.0041    0.0041    0.0044
TreeMap     Lookup      0.0895    0.0892    0.0917
AssocArray  Insertion   0.0262    0.0262    0.0268
AssocArray  Iteration   0.0015    0.0015    0.0016
AssocArray  Lookup      0.0130    0.0129    0.0132

(TreeMap and HashMap are in tango.util.container, AssocArray is the 
built-in D associative array; testing code is attached. Compiled with -O 
-release -inline using DSSS 0.78 and DMD 1.043 on Windows XP x86 SP3.)


Indeed, TSTs seem to beat Red-Black BSTs (the implementation Tango's 
TreeMap uses) quite handily, and have faster insert times than hash 
maps, though slower lookup. The big winner here, though, appears to be 
D's built-in associative arrays. I thought they were supposed to be very 
slow, but Tango's implementation, at least, looks pretty good (without 
any re-hashing).


For order-sensitive collections, I'm definitely using TSTs; I'm sold on 
the concept. Not only are they faster; allowing prefix search could be 
very useful. However, the times here are very low (the word file is only 
1.1 MB, and my system is fairly fast, though this is memory-dependent, not 
CPU-dependent)... I'll try with a larger word list & see what results I 
get.




Results are similar on a much larger word list:
Test        Part        Mean      Median    Max
----------  ----------  --------  --------  --------
TST         Insertion   0.1906    0.1630    0.2495
TST         Iteration   0.0032    0.0031    0.0035
TST         Lookup      0.1650    0.1651    0.1662
HashMap     Insertion   0.1637    0.1504    0.2679
HashMap     Iteration   0.0029    0.0028    0.0030
HashMap     Lookup      0.1212    0.1210    0.1241
TreeMap     Insertion   0.6133    0.6120    0.6276
TreeMap     Iteration   0.0035    0.0035    0.0035
TreeMap     Lookup      0.6310    0.6292    0.6446
AssocArray  Insertion   0.1131    0.1129    0.1154
AssocArray  Iteration   0.0014    0.0014    0.0016
AssocArray  Lookup      0.0939    0.0928    0.1045


Re: Ternary Search Trees

2009-04-24 Thread Robert Fraser

bearophile wrote:

Does someone have a need for Ternary Search Trees in Phobos (for D1, and 
eventually later for D2 too)?
TSTs let you find keys, key prefixes, or even keys with holes. Keys are 
arrays of T, where T is the template type.
They can be designed to store the keys alone, or as an associative data 
structure.

With some benchmarks I have seen that a simple TST implementation is about as 
fast as the built-in AAs of D (but much slower than Python dicts).

Bye,
bearophile


I implemented a version of the TST you posted using Tango + D1... Here 
are my results:


Test        Part        Mean      Median    Max
----------  ----------  --------  --------  --------
TST         Insertion   0.0428    0.0338    0.0886
TST         Iteration   0.0022    0.0022    0.0024
TST         Lookup      0.0225    0.0223    0.0237
HashMap     Insertion   0.0621    0.0421    0.2205
HashMap     Iteration   0.0035    0.0034    0.0036
HashMap     Lookup      0.0169    0.0168    0.0184
TreeMap     Insertion   0.1045    0.1041    0.1058
TreeMap     Iteration   0.0041    0.0041    0.0044
TreeMap     Lookup      0.0895    0.0892    0.0917
AssocArray  Insertion   0.0262    0.0262    0.0268
AssocArray  Iteration   0.0015    0.0015    0.0016
AssocArray  Lookup      0.0130    0.0129    0.0132

(TreeMap and HashMap are in tango.util.container, AssocArray is the 
built-in D associative array; testing code is attached. Compiled with -O 
-release -inline using DSSS 0.78 and DMD 1.043 on Windows XP x86 SP3.)


Indeed, TSTs seem to beat Red-Black BSTs (the implementation Tango's 
TreeMap uses) quite handily, and have faster insert times than hash 
maps, though slower lookup. The big winner here, though, appears to be 
D's built-in associative arrays. I thought they were supposed to be very 
slow, but Tango's implementation, at least, looks pretty good (without 
any re-hashing).


For order-sensitive collections, I'm definitely using TSTs; I'm sold on 
the concept. Not only are they faster; allowing prefix search could be 
very useful. However, the times here are very low (the word file is only 
1.1 MB, and my system is fairly fast, though this is memory-dependent, not 
CPU-dependent)... I'll try with a larger word list & see what results I get.
module candy.misc.dstest;

import tango.io.stream.Lines;
import tango.io.device.File;
import tango.time.StopWatch;
import tango.io.Stdout;
import tango.core.Memory;
import tango.core.Array;

import tango.util.collection.HashMap;
import tango.util.collection.TreeMap;

import candy.util.array;
import candy.util.tst;

public struct TestPart
{
char[] name;
void delegate() run;
}

public class Test
{
public char[] name;
public TestPart[] parts;
}

public static T median(T)(T[] values)
{
assert(values.length > 0);
uint mid = values.length / 2;
return (values.length % 2 == 0) ? ((values[mid - 1] + values[mid]) / 2) 
: values[mid];
}

public static T mean(T)(T[] values)
{
T total = 0;
foreach(value; values)
total += value;
return total / values.length;
}

public void doTest(char[] sectionName, uint iterations, Test[] tests)
{
Stdout.format("Running tests for {0}...", sectionName).newline();
Stdout("Test                Part                Mean      Median    Max").newline();
Stdout("------------------  ------------------  --------  --------  --------").newline();
foreach(test; tests)
{
double[][] times = new double[][test.parts.length];
foreach(j, part; test.parts)
times[j] = new double[iterations];
for(int i = 0; i < iterations; i++)
{
foreach(j, part; test.parts)
{
sw.start();
part.run();
times[j][i] = sw.stop();
}
GC.collect();
}
foreach(j, part; test.parts)
{
times[j].sort();

Stdout.format("{0,-20}{1,-20}{2,-10:f4}{3,-10:f4}{4,-10:f4}", test.name, 
part.name, mean(times[j]), median(times[j]), times[j][$ - 1]).newline();
}
}

}

private StopWatch sw;
public const uint SIZE = 1024 * 1000;
public const uint ITERS = 9;
public char[][] words, words2;

public int main()
{
Stdout.flush(true);

generateDictionary();
GC.collect();
Test[] dictTests;
dictTests ~= new DictTest!("TST", TernaryTree!(char, bool), "new 

Re: Ternary Search Trees

2009-04-18 Thread bearophile
This is the final version I have added to the dlibs:

/**
To create a memory pool of a native item, like structs. Not thread-safe.
It lets you create new items (it performs block allocations for speed), and then
free them all at once (and then you can create some more).
*/
struct MemoryPool(T, int MAX_BLOCK_BYTES=1 << 14) {
static assert(!is(T == class), "MemoryPool is designed for native data, not 
classes.");
static assert(MAX_BLOCK_BYTES >= 1,
  "MemoryPool: MAX_BLOCK_BYTES must be >= 1 bytes.");

struct Block {
static assert(MAX_BLOCK_BYTES >= T.sizeof,
  "MemoryPool: MAX_BLOCK_BYTES must be bigger than a T.");
static if ((T.sizeof * 5) > MAX_BLOCK_BYTES)
pragma(msg, "MemoryPool: Block is very small, so probably a 
MemoryPool isn't useful.");

T[(MAX_BLOCK_BYTES / T.sizeof)] items;
}

static Block*[] blocks;
static T* nextFree, lastFree; // a single int that keeps the number of 
items used
  // in the last block is enough, but it may 
lead to a
  // bit slower code.

/// Return pointer to a new item. Allocates memory if necessary.
static T* newItem() {
if (nextFree >= lastFree) { // then add new block.
// this whole new block gets scanned by the GC,
// even if you need only 1 item
blocks.length = blocks.length + 1;
blocks[$-1] = new Block;

nextFree = blocks[$-1].items.ptr;
lastFree = nextFree + Block.items.length;
}

return nextFree++;
}

/// Deallocate all items at once. You can then create new ones.
static void freeAll() {
foreach (block_ptr; blocks)
delete block_ptr;
blocks[] = null;
blocks.length = 0;
nextFree = null;
lastFree = null;
}

// This is like free() but doesn't deallocate memory, just cleans it up so
//   when you create new items there's no new memory allocation, and it's
//   faster. But to implement this other static members seem necessary.
//static void cleanMemory() {
//}
} // end of MemoryPool!()

unittest { // tests of MemoryPool!()
struct BinaryTreeNode {
BinaryTreeNode* left, right;
int data;

string toString() {
string nrepr(BinaryTreeNode* n) { return n is null ? "nil" : 
str(*n); }
return "<" ~ str(data) ~ ", " ~ nrepr(left)  ~ ", " ~ nrepr(right)  
~ ">";
}
}

MemoryPool!(BinaryTreeNode) BinaryTreeNodePool;

auto t = BinaryTreeNodePool.newItem();
t.data = 100;
t.left = BinaryTreeNodePool.newItem();
t.left.data = 7;
t.right = BinaryTreeNodePool.newItem();
t.right.data = 150;

assert(str(*t) == "<100, <7, nil, nil>, <150, nil, nil>>");
} // end tests of MemoryPool!()


Re: Ternary Search Trees

2009-04-17 Thread bearophile
Robert Fraser:
> Why is the default block size so freaking huge (16kb)?<

If the block is too small you waste too much time allocating small blocks of 
memory (about as much time as you'd waste allocating the single tree nodes).
If the block is too big you waste empty nodes in the last block, and over a 
certain size you don't gain much anyway. Also, very big chunks don't fit in 
the data part of the L1 cache (it's 32 KB on many CPUs).
From experiments I have seen that having a block bigger than 64 KB usually 
gains you very little, and having blocks smaller than 4-8 KB is a waste of 
time. 16-32 KB is the fastest, and on a 2 GB RAM PC it doesn't waste much RAM.
If you want to save some RAM you can reduce that value at compile time.
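(As a rough data point, using the node size bearophile quotes later in the 
thread, the default 16 KB block amortizes one allocation over about a thousand 
nodes:)

const int MAX_BLOCK_BYTES = 1 << 14;  // 16 KB, the default discussed above
const int NODE_BYTES = 16;            // bearophile's figure for a TST!(char) struct node
static assert(MAX_BLOCK_BYTES / NODE_BYTES == 1024);  // ~1000 nodes per block allocation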

Bye,
bearophile


Re: Ternary Search Trees

2009-04-17 Thread Robert Fraser

bearophile wrote:

OK, now the code works. Besides the TST struct, I have created this helper 
struct that keeps a pool of tree nodes:

struct StructPool(TyStruct, int MAX_BLOCK_SIZE_BYTES=1 << 14) {
static assert(MAX_BLOCK_SIZE_BYTES >= 1);

struct Block {
// Block is not too much big nor too much small
const int MAXB = MAX_BLOCK_SIZE_BYTES / TyStruct.sizeof;
const int SIZE = MAXB >= 1 ? MAXB : 1;
TyStruct[StructPool.Block.SIZE] structs;
}

static TyStruct* next_free_struct_ptr, last_struct_ptr;
static Block*[] blocks;

static TyStruct* newStruct() {
if (StructPool.next_free_struct_ptr >= StructPool.last_struct_ptr) {
// then add new block

// this whole new block gets scanned by the GC even if you need 
only 1 node
StructPool.blocks.length = StructPool.blocks.length + 1;
StructPool.blocks[$-1] = new Block;

StructPool.next_free_struct_ptr = 
StructPool.blocks[$-1].structs.ptr;
StructPool.last_struct_ptr = StructPool.next_free_struct_ptr + 
Block.SIZE;

}

return StructPool.next_free_struct_ptr++;
}

/// deallocate blocks of structs
static void free() {
foreach (block_ptr; StructPool.blocks)
delete block_ptr;
StructPool.blocks[] = null;
StructPool.blocks.length = 0;
StructPool.next_free_struct_ptr = null;
StructPool.last_struct_ptr = null;
}
}

alias StructPool!(TST!(char)) TSTpool;

(I have used two static pointers; a single index to the last filled-up struct 
would also be enough.)
That newStruct() is fast, about as fast as class allocations in Java :-)
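(A usage sketch, assuming the TST!(char) node layout posted elsewhere in the 
thread:)

alias StructPool!(TST!(char)) TSTpool;

auto node = TSTpool.newStruct();  // bump-pointer allocation from the current block
node.item = 'a';
// ... link the node into the tree, allocate more nodes ...
TSTpool.free();                   // release every block at once when the tree is discarded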

It's not nice code yet, with that external alias and so on, but it seems to 
work, and it almost halves the total allocation time of a big ternary search 
tree (though despite the improved cache coherence, positive lookups into such a 
TST are about as fast as in a normally allocated TST, maybe because the D GC 
packs the nodes closely in memory anyway). So I think in certain situations it 
can be useful.

Now I'm trying to make the code nicer and less bug-prone, for example by turning 
that StructPool struct into a template mixin, so I can add such pool capability 
to other structs that need to allocate many small structs and deallocate them 
all at the same time. But so far I have hit the same kinds of errors I showed in 
the two past posts.

Bye,
bearophile


Cool, thanks for posting this... Why is the default block size so 
freaking huge (16kb)?


Re: Ternary Search Trees

2009-04-17 Thread bearophile
OK, now the code works. Besides the TST struct, I have created this helper 
struct that keeps a pool of tree nodes:

struct StructPool(TyStruct, int MAX_BLOCK_SIZE_BYTES=1 << 14) {
static assert(MAX_BLOCK_SIZE_BYTES >= 1);

struct Block {
// Block is not too much big nor too much small
const int MAXB = MAX_BLOCK_SIZE_BYTES / TyStruct.sizeof;
const int SIZE = MAXB >= 1 ? MAXB : 1;
TyStruct[StructPool.Block.SIZE] structs;
}

static TyStruct* next_free_struct_ptr, last_struct_ptr;
static Block*[] blocks;

static TyStruct* newStruct() {
if (StructPool.next_free_struct_ptr >= StructPool.last_struct_ptr) {
// then add new block

// this whole new block gets scanned by the GC even if you need 
only 1 node
StructPool.blocks.length = StructPool.blocks.length + 1;
StructPool.blocks[$-1] = new Block;

StructPool.next_free_struct_ptr = 
StructPool.blocks[$-1].structs.ptr;
StructPool.last_struct_ptr = StructPool.next_free_struct_ptr + 
Block.SIZE;

}

return StructPool.next_free_struct_ptr++;
}

/// deallocate blocks of structs
static void free() {
foreach (block_ptr; StructPool.blocks)
delete block_ptr;
StructPool.blocks[] = null;
StructPool.blocks.length = 0;
StructPool.next_free_struct_ptr = null;
StructPool.last_struct_ptr = null;
}
}

alias StructPool!(TST!(char)) TSTpool;

(I have used two static pointers; a single index to the last filled-up struct 
would also be enough.)
That newStruct() is fast, about as fast as class allocations in Java :-)

It's not nice code yet, with that external alias and so on, but it seems to 
work, and it almost halves the total allocation time of a big ternary search 
tree (though despite the improved cache coherence, positive lookups into such a 
TST are about as fast as in a normally allocated TST, maybe because the D GC 
packs the nodes closely in memory anyway). So I think in certain situations it 
can be useful.

Now I'm trying to make the code nicer and less bug-prone, for example by turning 
that StructPool struct into a template mixin, so I can add such pool capability 
to other structs that need to allocate many small structs and deallocate them 
all at the same time. But so far I have hit the same kinds of errors I showed in 
the two past posts.

Bye,
bearophile


Re: Ternary Search Trees

2009-04-17 Thread bearophile
To shorten the code further I have replaced the Block struct with a typedef'd 
fixed-size array:

struct TST(T) {
// *** TST instance attributes here ***

const int BLOCKSIZE = ((1 << 16) / TST.sizeof) > 1 ? ((1 << 16) / 
TST.sizeof) : 1;

// usedLastBlock keeps the number of nodes used from the last block of 
blocks
// there's no need to keep the whole number of blocks, because its usage is
// slower and all the blocks but the last one are full anyway.
static int usedLastBlock;

typedef TST[BLOCKSIZE] Block;
static Block*[] blocks;

static TST* newNode() {
if (!TST.blocks.length || TST.usedLastBlock >= Block.length) {
// add new block
TST.blocks.length = blocks.length + 1;
// this whole new block gets scanned by the GC even if you need 
only 1 node
TST.blocks[$-1] = new Block; // this doesn't work, you can't new a 
static array
TST.usedLastBlock = 0;
}

return &(*(TST.blocks[$-1])[TST.usedLastBlock++]);
}

/// deallocate blocks of nodes
static void free() {
foreach (block_ptr; TST.blocks)
delete block_ptr;
TST.blocks.length = 0;
}


But besides the same bug as before:
struct test.TST!(char).TST no size yet for forward reference
it's even more buggy, because now you can't even new a Block. Oh, well.

Bye,
bearophile


Re: Ternary Search Trees

2009-04-17 Thread bearophile
Fawzi:
>so TST in the structure refers by default to the struct itself.<

Thank you, now I understand. I am slowly learning more.

-

Jason House:

> Why use a struct for TST?<

To give you a good answer I have done some tests using a file 
"english_words.txt", of about 2.1 MB that contains about 220_000 different 
words.

The following tiny program loads that file and splits it.
It requires 6.2 MB of RAM and about 0.09 seconds to run (once the file cache is 
warm. This thanks to the last patch to the GC that speeds this up a lot):

import std.stdio: writefln;
import std.string: split;
import std.file: read;
void main() {
auto words = split(cast(string)read("english_words.txt"));
}



The following program tests the TST (positive lookups only, no negative lookups 
yet); this is for the struct-based TST!(char). For the class-based version, 
you can just replace *root with root:

void main() {
auto words = split(cast(string)read("english_words.txt"));
auto root = new TST!(char);
root.bulk_add(words);
for (int i; i < 100; i++)
foreach (word; words)
if (!(word in *root))
throw new Exception("\"" ~ word ~ "\" not found");
}

The struct-based needs 3.95 seconds and 24.35 MB RAM to run.
The class-based (final methods) needs 4.03 seconds and 24.56 MB RAM (DMD v1.042 
on WinXP, 2 GHz CPU).

Both versions allocate 461_495 TST nodes. [Each TST struct node needs 16 bytes. 
Each TST object probably needs 24 bytes, which I think the GC rounds up to 32. 
But the actual memory used doesn't show this twofold difference; I don't know why.]
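(For reference, the arithmetic behind that expectation: 461_495 nodes * 16 bytes 
is about 7.0 MB of raw node storage for the struct version, and 461_495 * 32 
bytes is about 14.1 MB if each class object really costs 32 bytes, so a roughly 
twofold gap in node storage was expected; yet the measured totals, 24.35 MB vs 
24.56 MB, are nearly the same.)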

I guess the speed difference comes mostly from the higher memory overhead of 
classes (because the more memory there is, the more the CPU caches have to 
work).



With a tiny change, adding "scope" so that just the root of the tree gets 
allocated on the stack, the running time of the class-based version seems to 
drop from 4.03 to 3.99 seconds.

I don't know why moving the root to the stack makes so much difference. It's 
not a small difference, because on a 2 GHz CPU ~0.04 seconds means something 
like 80 million clock ticks. The root reference is traversed each time, so 
removing just this one level of indirection may explain the time difference. A 
mammalian brain like mine isn't well equipped to understand something that 
performs billions of operations every second. I have taken a look at the asm 
generated for the class-based version with and without scope, but I have not 
understood it well enough:

void main() {
auto words = split(cast(string)read("english_words.txt"));
scope auto root = new TST!(char); // scope 
root.bulk_add(words);
for (int i; i < 100; i++)
foreach (word; words)
if (!(word in root))
throw new Exception("\"" ~ word ~ "\" not found");
}



This, with a struct-based TST, needs 3.96 seconds (0.48 s just building the 
tree):

void main() {
auto words = split(cast(string)read("english_words.txt"));
TST!(char) root;
root.bulk_add(words);
for (int i; i < 100; i++)
foreach (word; words)
if (!(word in root))
throw new Exception("\"" ~ word ~ "\" not found");
}



The following, with a struct-based TST, needs 3.71 s and 13.68 MB (0.32 s just 
building, 0.31 s with no GC); allocating the memory with calloc instead: 
3.52 s total, 13.37 MB (0.26 s just building):

TST[] nodes;
int nnode;

void main() {
nodes = new TST!(char)[461_495];
auto words = split(cast(string)read("english_words.txt"));
auto root = nodes[nnode++];
root.bulk_add(words);
for (int i; i < 100; i++)
foreach (word; words)
if (!(word in root))
throw new Exception("\"" ~ word ~ "\" not found");
}


Idem, but with struct fields aligned to 1 byte: 3.89 s, 12.72 MB (0.48 s just 
building; 12.46 MB when allocating the memory with calloc).



Without align, using std.gc.malloc + hasNoPointers + std.c.memset: 3.53 s total.

void main() {
disable();
TST.nodes = cast(TST*)gcmalloc(461_495 * TST.sizeof);
hasNoPointers(TST.nodes);
memset(TST.nodes, 0, 461_495 * TST.sizeof);
auto words = split(cast(string)read("english_words.txt"));
auto root = TST.nodes[TST.nnode++];
root.bulk_add(words);
for (int i; i < 100; i++)
foreach (word; words)
if (!(word in root))
throw new Exception("\"" ~ word ~ "\" not found");
}

Of course you can add static methods to the struct to create the array, create 
a new node by moving an index forward, and free the whole array, plus static 
fields for the array and the number of nodes used so far.
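(A sketch of that arrangement; the names and the up-front capacity are 
hypothetical:)

struct TST(T) {
    // ... tree fields as before ...

    static TST[] nodes;   // the single big array of nodes
    static size_t nnode;  // how many nodes have been handed out so far

    static void createNodes(size_t n) { nodes = new TST[n]; nnode = 0; }
    static TST* newNode() { return &nodes[nnode++]; }   // just advance the index
    static void freeAll() { delete nodes; nnode = 0; }
}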



Once tree nodes are items of an array, you can use indexes to address node 
children, so if memory gets very tight you can even use 24-bit integers as 
indexes:
http://www.fantascienza.net/leonardo/so/dlibs/uint24.html

They are very slow, so in most situations you want to use ints:

struct TST(T, TyIn

Re: Ternary Search Trees

2009-04-16 Thread Robert Fraser

bearophile wrote:

Robert Fraser:
Hey, could you please post your implementation (assuming it's 
open-source?) I'd love to use them, but can't be bothered to implement 
it. Thanks!


It's just a translation to D of this Py code:
Info:
http://jeethurao.com/blog/?p=146
Code:
http://bitbucket.org/woadwarrior/trie/

Where necessary you can also follow the ideas from the original article here 
(but using an enum is MUCH better and it's also safer in C, to avoid aliasing):
http://www.ddj.com/windows/184410528

My implementation is about 57 times faster than the Python version and much 
faster than the Psyco version too.

You can also take a look here:
http://sourceforge.net/projects/ternary/

You can implement it with a _single_ template struct, filled with all the 
methods you want:

struct TST(T) {
TST* left, right; 
union {

  TST* mid;
  int index;
}
T item;

// methods here
}

Note that inside there you don't need TST!(T)*, TST* is probably enough, but I 
don't know why yet.

You can start with only the few basic methods, test them, and later when/if you 
want you can add more methods, taking ideas from here:
http://abc.se/~re/code/tst/tst_docs/index.html

I'd like to not show my code because it's not nice, it has no comments, tests, 
etc. But it's quite close to that Py code.

A problem with that template struct is that even if you add an opIn_r() method 
as I have done, then you have to write (note the star):
if (!(k in *ds))

I don't know if in D2 for structs you can use the syntax you use with classes:
if (!(k in ds))

Even better syntax is:
if (k !in ds)

In Python you write something better still:
if k not in ds:

Bye,
bearophile


Awesome; thanks


Re: Ternary Search Trees

2009-04-16 Thread Robert Fraser

Jason House wrote:

bearophile Wrote:


Robert Fraser:
Hey, could you please post your implementation (assuming it's 
open-source?) I'd love to use them, but can't be bothered to implement 
it. Thanks!

It's just a translation to D of this Py code:
Info:
http://jeethurao.com/blog/?p=146
Code:
http://bitbucket.org/woadwarrior/trie/

Where necessary you can also follow the ideas from the original article here 
(but using an enum is MUCH better and it's also safer in C, to avoid aliasing):
http://www.ddj.com/windows/184410528

My implementation is about 57 times faster than the Python version and much 
faster than the Psyco version too.

You can also take a look here:
http://sourceforge.net/projects/ternary/

You can implement it with a _single_ template struct, filled with all the 
methods you want:

struct TST(T) {
TST* left, right; 
union {

  TST* mid;
  int index;
}
T item;

// methods here
}

Note that inside there you don't need TST!(T)*, TST* is probably enough, but I 
don't know why yet.

You can start with only the few basic methods, test them, and later when/if you 
want you can add more methods, taking ideas from here:
http://abc.se/~re/code/tst/tst_docs/index.html

I'd like to not show my code because it's not nice, it has no comments, tests, 
etc. But it's quite close to that Py code.

A problem with that template struct is that even if you add an opIn_r() method 
as I have done, then you have to write (note the star):
if (!(k in *ds))

I don't know if in D2 for structs you can use the syntax you use with classes:
if (!(k in ds))

Even better syntax is:
if (k !in ds)

In Python you write something better still:
if k not in ds:

Bye,
bearophile


Why use a struct for TST? It requires you to use pointers everywhere, and 
passing by value hardly makes any sense. A class (optionally with final methods 
for speed) would work a lot better. It even fixes your issue with opIn.


I assume it saves some space (classes have a monitor and vtbl pointer, 
which are unneeded here). For maximum efficiency, rather than mallocing 
individual nodes, you'd want to be using some sort of array.


Re: Ternary Search Trees

2009-04-16 Thread Jason House
bearophile Wrote:

> Robert Fraser:
> > Hey, could you please post your implementation (assuming it's 
> > open-source?) I'd love to use them, but can't be bothered to implement 
> > it. Thanks!
> 
> It's just a translation to D of this Py code:
> Info:
> http://jeethurao.com/blog/?p=146
> Code:
> http://bitbucket.org/woadwarrior/trie/
> 
> Where necessary you can also follow the ideas from the original article here 
> (but using an enum is MUCH better and it's also safer in C, to avoid 
> aliasing):
> http://www.ddj.com/windows/184410528
> 
> My implementation is about 57 times faster than the Python version and much 
> faster than the Psyco version too.
> 
> You can also take a look here:
> http://sourceforge.net/projects/ternary/
> 
> You can implement it with a _single_ template struct, filled with all the 
> methods you want:
> 
> struct TST(T) {
> TST* left, right; 
> union {
>   TST* mid;
>   int index;
> }
> T item;
> 
> // methods here
> }
> 
> Note that inside there you don't need TST!(T)*, TST* is probably enough, but 
> I don't know why yet.
> 
> You can start with only the few basic methods, test them, and later when/if 
> you want you can add more methods, taking ideas from here:
> http://abc.se/~re/code/tst/tst_docs/index.html
> 
> I'd like to not show my code because it's not nice, it has no comments, 
> tests, etc. But it's quite close to that Py code.
> 
> A problem with that template struct is that even if you add an opIn_r() method 
> as I have done, then you have to write (note the star):
> if (!(k in *ds))
> 
> I don't know if in D2 for structs you can use the syntax you use with classes:
> if (!(k in ds))
> 
> Even better syntax is:
> if (k !in ds)
> 
> In Python you write something better still:
> if k not in ds:
> 
> Bye,
> bearophile

Why use a struct for TST? It requires you to use pointers everywhere, and 
passing by value hardly makes any sense. A class (optionally with final methods 
for speed) would work a lot better. It even fixes your issue with opIn.


Re: Ternary Search Trees

2009-04-16 Thread Fawzi Mohamed

On 2009-04-16 18:19:36 +0200, bearophile  said:


[...]
struct TST(T) {
TST* left, right;
union {
  TST* mid;
  int index;
}
T item;

// methods here
}

Note that inside there you don't need TST!(T)*, TST* is probably 
enough, but I don't know why yet.


struct TST(T){ ... }

is syntactic sugar for

template TST(T){
 struct TST{ ...}
}

so TST in the structure refers by default to the struct itself.

Fawzi



Re: Ternary Search Trees

2009-04-16 Thread bearophile
Robert Fraser:
> Hey, could you please post your implementation (assuming it's 
> open-source?) I'd love to use them, but can't be bothered to implement 
> it. Thanks!

It's just a translation to D of this Py code:
Info:
http://jeethurao.com/blog/?p=146
Code:
http://bitbucket.org/woadwarrior/trie/

Where necessary you can also follow the ideas from the original article here 
(but using an enum is MUCH better and it's also safer in C, to avoid aliasing):
http://www.ddj.com/windows/184410528

My implementation is about 57 times faster than the Python version and much 
faster than the Psyco version too.

You can also take a look here:
http://sourceforge.net/projects/ternary/

You can implement it with a _single_ template struct, filled with all the 
methods you want:

struct TST(T) {
TST* left, right; 
union {
  TST* mid;
  int index;
}
T item;

// methods here
}

Note that inside there you don't need TST!(T)*, TST* is probably enough, but I 
don't know why yet.

You can start with only the few basic methods, test them, and later when/if you 
want you can add more methods, taking ideas from here:
http://abc.se/~re/code/tst/tst_docs/index.html

I'd like to not show my code because it's not nice, it has no comments, tests, 
etc. But it's quite close to that Py code.

A problem with that template struct is that even if you add an opIn_r() method 
as I have done, then you have to write (note the star):
if (!(k in *ds))
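(For context, a minimal opIn_r might look like this; contains() stands in for 
whatever lookup method the struct actually provides:)

struct TST(T) {
    // ... fields and methods as above ...

    // Called for expressions like `key in *ds`, where ds is a TST!(T)*.
    bool opIn_r(T[] key) {
        return this.contains(key);  // hypothetical lookup helper
    }
}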

I don't know if in D2 for structs you can use the syntax you use with classes:
if (!(k in ds))

Even better syntax is:
if (k !in ds)

In Python you write something better still:
if k not in ds:

Bye,
bearophile


Re: Ternary Search Trees

2009-04-16 Thread Daniel Keep


Andrei Alexandrescu wrote:
> Robert Fraser wrote:
>> bearophile wrote:
>>> Does someone have a need for Ternary Search Trees in Phobos (for
>>> D1, and eventually later for D2 too)?
>>> TSTs let you find keys, key prefixes, or even keys with holes. Keys
>>> are arrays of T, where T is the template type.
>>> They can be designed to store the keys alone, or as an associative
>>> data structure.
>>>
>>> With some benchmarks I have seen that a simple TST implementation is
>>> about as fast as the built-in AAs of D (but much slower than Python
>>> dicts).
>>>
>>> Bye,
>>> bearophile
>>
>> Hey, could you please post your implementation (assuming it's
>> open-source?) I'd love to use them, but can't be bothered to implement
>> it. Thanks!
> 
> BTW, anyone got a KD-tree implementation? I could use one.
> 
> Andrei

A random idea occurs: it might be interesting to put up a page somewhere
of "things we'd like in the standard library."  Provide details on a
rough API, functionality, and license.  It might get some people who are
bored on a weekend to drop in code for the standard library.  :)

  -- Daniel


Re: Ternary Search Trees

2009-04-15 Thread bearophile
Andrei Alexandrescu:
> BTW, anyone got a KD-tree implementation? I could use one.

I think at the moment I have none, but a 2D/3D one deserves to be in the std 
lib, because it lets you reduce the complexity of a lot of geometry-based 
code.

Bye,
bearophile


Re: Ternary Search Trees

2009-04-15 Thread Andrei Alexandrescu

Robert Fraser wrote:

bearophile wrote:
Does someone have a need for Ternary Search Trees in Phobos (for D1, and 
eventually later for D2 too)?
TSTs let you find keys, key prefixes, or even keys with holes. Keys are 
arrays of T, where T is the template type.
They can be designed to store the keys alone, or as an associative 
data structure.


With some benchmarks I have seen that a simple TST implementation is 
about as fast as the built-in AAs of D (but much slower than Python 
dicts).


Bye,
bearophile


Hey, could you please post your implementation (assuming it's 
open-source?) I'd love to use them, but can't be bothered to implement 
it. Thanks!


BTW, anyone got a KD-tree implementation? I could use one.

Andrei


Re: Ternary Search Trees

2009-04-15 Thread Robert Fraser

bearophile wrote:

Does someone have a need for Ternary Search Trees in Phobos (for D1, and 
eventually later for D2 too)?
TSTs let you find keys, key prefixes, or even keys with holes. Keys are 
arrays of T, where T is the template type.
They can be designed to store the keys alone, or as an associative data 
structure.

With some benchmarks I have seen that a simple TST implementation is about as 
fast as the built-in AAs of D (but much slower than Python dicts).

Bye,
bearophile


Hey, could you please post your implementation (assuming it's 
open-source?) I'd love to use them, but can't be bothered to implement 
it. Thanks!


Re: Ternary Search Trees

2009-04-13 Thread Robert Fraser

bearophile wrote:

Does someone have a need for Ternary Search Trees in Phobos (for D1, and 
eventually later for D2 too)?
TSTs let you find keys, key prefixes, or even keys with holes. Keys are 
arrays of T, where T is the template type.
They can be designed to store the keys alone, or as an associative data 
structure.

With some benchmarks I have seen that a simple TST implementation is about as 
fast as the built-in AAs of D (but much slower than Python dicts).

Bye,
bearophile


I'd love to see them! Got a Tango version? ;-P

Also, D's AAs aren't a great benchmark, since they are outperformed even 
by a basic hashtable implementation (compare them to 
tango.util.collection.HashMap). But TSTs support many more operations, 
so the fact that they're slower than BSTs/hashtables is to be expected.


Ternary Search Trees

2009-04-13 Thread bearophile
Does someone have a need for Ternary Search Trees in Phobos (for D1, and 
eventually later for D2 too)?
TSTs let you find keys, key prefixes, or even keys with holes. Keys are 
arrays of T, where T is the template type.
They can be designed to store the keys alone, or as an associative data 
structure.

With some benchmarks I have seen that a simple TST implementation is about as 
fast as the built-in AAs of D (but much slower than Python dicts).

Bye,
bearophile