Re: Working functionally with third party libraries

2015-07-18 Thread Kagamin via Digitalmars-d-learn
On Saturday, 18 July 2015 at 08:03:56 UTC, Jarl André Hübenthal 
wrote:
It's simple. In most cases you do an advanced aggregated search 
in mongo, and what you get is then a MongoCursor. Let's say I am 
retrieving all projects for a given customer where the project 
is started. I really have no interest in lazily evaluating 
this result, because I want to return this data to the client 
(browser) immediately.


How big is the slowdown you notice for lazy processing? Lazy 
processing is believed to be faster because it consumes fewer 
resources.


And let's say I am in a prototype phase where I haven't yet 
implemented all those nasty mongo queries; I want to be able to 
filter, map and reduce the result and work with arrays, not some 
sort of non-evaluated lazy MapResult.


I believe those algorithms were written to work on lazy ranges. 
What makes you think they can't do that?


Re: Working functionally with third party libraries

2015-07-18 Thread via Digitalmars-d-learn

On Saturday, 18 July 2015 at 09:18:14 UTC, Kagamin wrote:
On Saturday, 18 July 2015 at 08:03:56 UTC, Jarl André Hübenthal 
wrote:
It's simple. In most cases you do an advanced aggregated search 
in mongo, and what you get is then a MongoCursor. Let's say I 
am retrieving all projects for a given customer where the 
project is started. I really have no interest in lazily 
evaluating this result, because I want to return this data to 
the client (browser) immediately.


How big is the slowdown you notice for lazy processing? Lazy 
processing is believed to be faster because it consumes fewer 
resources.


And let's say I am in a prototype phase where I haven't yet 
implemented all those nasty mongo queries; I want to be able 
to filter, map and reduce the result and work with arrays, not 
some sort of non-evaluated lazy MapResult.


I believe those algorithms were written to work on lazy ranges. 
What makes you think they can't do that?


I don't understand where you are going with this. I have solved 
my problem. Laziness is good for, let's say, taking 5 out of 
infinite results. When you ask for a complete list and want the 
complete list, you take all. In Clojure you actually say that: 
doall. In D, .array does the same thing; it converts lazy to 
non-lazy.
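In D that looks like this (a minimal sketch, with a plain array standing in for the MongoCursor):

```d
import std.algorithm : filter, map;
import std.array : array;

void main()
{
    auto nums = [1, 2, 3, 4, 5];

    // Lazy: nothing is computed yet; `doubled` is just a range object.
    auto doubled = nums.filter!(n => n % 2 == 0)
                       .map!(n => n * 2);

    // Eager: .array walks the whole range and allocates the result,
    // much like Clojure's doall.
    int[] result = doubled.array;   // [4, 8]
}
```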


Re: Working functionally with third party libraries

2015-07-18 Thread via Digitalmars-d-learn

On Friday, 17 July 2015 at 12:59:24 UTC, Kagamin wrote:
On Friday, 17 July 2015 at 09:07:29 UTC, Jarl André Hübenthal 
wrote:
Or loop it. But it's pretty nice to know that there is laziness 
in D. However, when I query mongo I expect all docs to be 
retrieved, since there is no paging in the underlying 
queries? Thus, having lazy functionality on top of non-lazy 
db queries seems a bit off, don't you think?


From the client point of view db is sort of lazy: data is 
received from server as needed. Why would you want to put all 
data into an array before processing it? Why can't you process 
it from the range directly?


It's simple. In most cases you do an advanced aggregated search in 
mongo, and what you get is then a MongoCursor. Let's say I am 
retrieving all projects for a given customer where the project is 
started. I really have no interest in lazily evaluating this 
result, because I want to return this data to the client 
(browser) immediately. And let's say I am in a prototype phase 
where I haven't yet implemented all those nasty mongo queries; I 
want to be able to filter, map and reduce the result and work 
with arrays, not some sort of non-evaluated lazy MapResult. In 
Scala, luckily, I have implicit conversions, so I can just stop 
thinking about it and have it converted automatically.


Re: Virtual value types during compile-time for static type safety, static optimizations and function overloading.

2015-07-18 Thread Tamas via Digitalmars-d-learn

Sorry, the main function of positive0.d correctly looks like this:

int main() {
  return !((abs(-16) == 16)
 && (abs(3) == 3)
 && (square(5).absPositive == 25)
 && (square(-4).absPositive == 16));
}

But this does not affect the results, the asm file sizes or the 
asm abs function bodies.





Re: Virtual value types during compile-time for static type safety, static optimizations and function overloading.

2015-07-18 Thread Tamas via Digitalmars-d-learn

On Saturday, 18 July 2015 at 13:16:26 UTC, Adam D. Ruppe wrote:

On Saturday, 18 July 2015 at 10:06:07 UTC, Tamas wrote:

Compile & execute:
$ dmd positive0.d; ./positive0; echo $?
$ ldc2 positive0.d; ./positive0; echo $?


Try adding the automatic optimize flags in all your cases. For 
dmd, `-O -inline`. Not sure about ldc but I think it is `-O` as 
well.


Thanks, indeed, after -O -inline the bodies of the two abs 
functions are the same! :)


The asm code of the templated version is still longer overall, 
but I think it's only some garbage that is not really executed 
(e.g. some symbols with assert and unittest in the name, although 
I have none such).


So thank you, it's really awesome! :)


Re: Virtual value types during compile-time for static type safety, static optimizations and function overloading.

2015-07-18 Thread Tamas via Digitalmars-d-learn
I made a thorough comparison using multiple compilers and a 
summary of the findings. In short, there is a runtime overhead.


I reduced the code to cut out the imports and made two versions 
with equivalent semantic content.
positive0.d contains the hand written specializations of the abs 
function.
positive.d contains the solution with function templates / static 
type analysis.


///

/* positive0.d:

Compile & execute:
$ dmd positive0.d; ./positive0; echo $?
$ ldc2 positive0.d; ./positive0; echo $?

generate ASM source:
$ dmd positive0.d; gobjdump -d positive0.o > positive0.dmd.s
$ ldc2 positive0.d -output-s

*/

int absPositive(int n) {
  return n;
}

int abs(int n) {
  return (n >= 0) ? n : -n;
}

int square(int x) {
  return x * x;
}

int main() {
  return !((abs(-16) == 16)
 && (abs(3) == 3)
 && (square(5).abs == 25)
 && (square(-4).abs == 16));
}

///

/* positive.d:

Compile & execute:
$ dmd positive.d; ./positive; echo $?
$ ldc2 positive.d; ./positive; echo $?

generate ASM source:
$ dmd positive.d; gobjdump -d positive.o > positive.dmd.s
$ ldc2 positive.d -output-s

*/
struct Positive {
  int num;
  alias num this;
}

Positive abs(T)(T n) {
  static if (is(T == Positive)) {
return n;
  } else {
    return Positive((n >= 0) ? n : -n);
  }
}

Positive square(int x) {
  return Positive(x * x);
}

int main() {
  return !((abs(-16) == 16)
 && (abs(3) == 3)
 && (square(5).abs == 25)
 && (square(-4).abs == 16));
}

///

I compared the generated asm. The asm code was substantially 
longer in the case of the non-hand-written specialization of the 
abs function.


The 'optimized' versions of the abs function were equivalent, but 
the 'non-optimized' versions show the runtime overhead for dmd 
and ldc2 alike: double 'mov' commands instead of single ones.


The compiled hand written code was roughly half the size for both 
compilers:


File sizes:
ldc:
2678 positive0.s
4313 positive.s

dmd:
3442 positive0.dmd.s
8701 positive.dmd.s

You can see the abs functions below, and you can spot the double 
'mov' operations:


positive.dmd.s:
0230 
_D8positive10__T3absTiZ3absFNaNbNiNfiZS8positive8Positive:

 230:   55  push   %rbp
 231:   48 8b ecmov%rsp,%rbp
 234:   48 83 ec 10 sub$0x10,%rsp
 238:   85 ff   test   %edi,%edi
 23a:	78 02	js 23e 
_D8positive10__T3absTiZ3absFNaNbNiNfiZS8positive8Positive+0xe
 23c:	eb 02	jmp240 
_D8positive10__T3absTiZ3absFNaNbNiNfiZS8positive8Positive+0x10

 23e:   f7 df   neg%edi
 240:   89 7d f0mov%edi,-0x10(%rbp)
 243:   48 89 f8mov%rdi,%rax
 246:   c9  leaveq
 247:   c3  retq

0248 
_D8positive28__T3absTS8positive8PositiveZ3absFNaNbNiNfS8positive8PositiveZS8positive8Positive:

 248:   55  push   %rbp
 249:   48 8b ecmov%rsp,%rbp
 24c:   48 83 ec 10 sub$0x10,%rsp
 250:   48 89 f8mov%rdi,%rax
 253:   c9  leaveq
 254:   c3  retq
 255:   0f 1f 00nopl   (%rax)



positive0.dmd.s:
00a0 _D9positive011absPositiveFiZi:
  a0:   55  push   %rbp
  a1:   48 8b ecmov%rsp,%rbp
  a4:   48 83 ec 10 sub$0x10,%rsp
  a8:   48 89 f8mov%rdi,%rax
  ab:   c9  leaveq
  ac:   c3  retq
  ad:   0f 1f 00nopl   (%rax)

00b0 _D9positive03absFiZi:
  b0:   55  push   %rbp
  b1:   48 8b ecmov%rsp,%rbp
  b4:   48 83 ec 10 sub$0x10,%rsp
  b8:   85 ff   test   %edi,%edi
  ba:   78 05   js c1 _D9positive03absFiZi+0x11
  bc:   48 89 f8mov%rdi,%rax
  bf:   eb 05   jmpc6 _D9positive03absFiZi+0x16
  c1:   48 89 f8mov%rdi,%rax
  c4:   f7 d8   neg%eax
  c6:   c9  leaveq
  c7:   c3  retq


ldc2:
positive.s:

__D8positive10__T3absTiZ3absFNaNbNiNfiZS8positive8Positive:
.cfi_startproc
movl%edi, -4(%rsp)
cmpl$0, -4(%rsp)
jl  LBB2_2
leaq-4(%rsp), %rax
movq%rax, -16(%rsp)
jmp LBB2_3
LBB2_2:
leaq-20(%rsp), %rax
xorl%ecx, %ecx
subl-4(%rsp), %ecx
movl%ecx, -20(%rsp)
movq%rax, -16(%rsp)
LBB2_3:
movq-16(%rsp), %rax
movl(%rax), %ecx
movl%ecx, -8(%rsp)
movl%ecx, %eax
retq
.cfi_endproc

.globl  
__D8positive28__T3absTS8positive8PositiveZ3absFNaNbNiNfS8positive8PositiveZS8positive8Positive
.weak_definition

VisualD building with GDC setting library path

2015-07-18 Thread kerdemdemir via Digitalmars-d-learn

Hi,

I am trying to build Cristi Cobzarenco's fork of Scid, which has 
LAPACK/BLAS dependencies.


I add all modules of Scid to my project and I am trying to build 
it within my project.


I add Library Files: liblapack.a libblas.a libtmglib.a 
libgfortran.a etc. via the menu 
Configuration Properties -> Linker -> General -> Library Files. 
Even though I set the C:\Qt\Tools\mingw491_32\i686-w64-mingw32\lib 
path, which has libgfortran.a, from Visual D Settings -> Library 
Paths,


I am getting the error:

gdc: error: libgfortran.a: No such file or directory.

Libraries only work if I copy and paste them into my project 
folder. And copying all libraries, fortran.a, pthread.a etc., 
seems not logical to me.


I read there is a known issue with sc.ini, but it should only be 
with DMD, not with GDC. How can I set the library path with 
Visual D while building with GDC?


Re: Virtual value types during compile-time for static type safety, static optimizations and function overloading.

2015-07-18 Thread Adam D. Ruppe via Digitalmars-d-learn

On Saturday, 18 July 2015 at 10:06:07 UTC, Tamas wrote:

Compile & execute:
$ dmd positive0.d; ./positive0; echo $?
$ ldc2 positive0.d; ./positive0; echo $?


Try adding the automatic optimize flags in all your cases. For 
dmd, `-O -inline`. Not sure about ldc but I think it is `-O` as 
well.






Sending an immutable object to a thread

2015-07-18 Thread Frank Pagliughi via Digitalmars-d-learn

Hey All,

I'm trying to send immutable class objects to a thread, and am 
having trouble if the object is one of several variables sent to 
the thread. For example, I have a Message class:


class Message { ... }

and I create an immutable object from it, and send it to another 
thread:


auto msg = immutable Message(...);

Tid tid = spawn(threadFunc);
send(tid, thisTid(), msg);

I then attempt to receive it in the threadFunc like:

receive(
(Tid cli, immutable Message msg) {
int retCode = do_something_with(msg);
send(cli, retCode);
}
);

I get compilation errors about the inability to build the 
tuple, like:
/usr/include/dmd/phobos/std/variant.d(346): Error: cannot modify 
struct *zat Tuple!(Tid, immutable(Message)) with immutable members
/usr/include/dmd/phobos/std/variant.d(657): Error: template 
instance std.variant.VariantN!32LU.VariantN.handler!(Tuple!(Tid, 
immutable(Message))) error instantiating
/usr/include/dmd/phobos/std/variant.d(580):instantiated 
from here: opAssign!(Tuple!(Tid, immutable(Message)))
/usr/include/dmd/phobos/std/concurrency.d(124):
instantiated from here: __ctor!(Tuple!(Tid, immutable(Message)))
/usr/include/dmd/phobos/std/concurrency.d(628):
instantiated from here: __ctor!(Tid, immutable(Message))
/usr/include/dmd/phobos/std/concurrency.d(618):... (1 
instantiations, -v to show) ...
/usr/include/dmd/phobos/std/concurrency.d(594):
instantiated from here: _send!(Tid, immutable(Message))
MsgTest.d(92):instantiated from here: send!(Tid, 
immutable(Message))


I tried various combinations of using Rebindable, but couldn't 
get anything to compile.


Thanks.


Does shared prevent compiler reordering?

2015-07-18 Thread rsw0x via Digitalmars-d-learn

I can't find anything on this in the spec.


String Metaprogramming

2015-07-18 Thread Clayton via Digitalmars-d-learn
I'm new to D programming and am considering it since it supports 
compile-time function execution. My challenge is how I can 
re-implement the function below so that it is fully executed at 
compile time. The function should result in tabel1 being computed 
at compile time. There seems to be a lot of mutation happening 
here, yet I have heard no mutation should take place in 
metaprogramming, as it subscribes to the functional programming 
paradigm.




void computeAtCompileTime(ref string pattern, ref int[char] tabel1) {

    int size = to!int(pattern.length);

    foreach (c; ALPHABET) {
        tabel1[c] = size;
    }

    for (int i = 0; i < size - 1; ++i) {   // Initialise array
        tabel1[pattern[i]] = size - i - 1;

        pragma(msg, format("reached pattern  table1[pattern[i]]=(%s) here",
            table1[pattern[i]].stringof ~ " v=" ~ (size - i - 1).stringof));
    }
}



Re: String Metaprogramming

2015-07-18 Thread Adam D. Ruppe via Digitalmars-d-learn

On Saturday, 18 July 2015 at 13:48:20 UTC, Clayton wrote:
There seems to be a lot of mutation happening here yet I have 
heard no mutation should take place in meta-programming as it 
subscribes to functional programming paradigm.


That's not true in D, you can just write a regular function and 
evaluate it in a compile time context, like initializing a static 
variable.


You usually don't need to write special code for compile time 
stuff in D.
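For instance, a minimal sketch (the buildTable helper is hypothetical, modeled on the lookup table from the question):

```d
// An ordinary run-time function: no special metaprogramming syntax.
int[256] buildTable(string pattern) {
    int[256] t;
    t[] = cast(int) pattern.length;   // default every entry to the length
    foreach (i, c; pattern)
        t[c] = cast(int) (pattern.length - i - 1);
    return t;
}

// An enum initializer must be a constant, so the compiler evaluates
// buildTable via CTFE; the function itself needs no changes.
enum table = buildTable("example");
static assert(table['e'] == 0);
```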


Re: String Metaprogramming

2015-07-18 Thread E.S. Quinn via Digitalmars-d-learn

On Saturday, 18 July 2015 at 13:48:20 UTC, Clayton wrote:
Am new to D programming, am considering it since it supports 
compile-time function execution . My challenge is how can I 
re-implement the function below so that it is fully executed in 
compile-time. The function should result to tabel1 being 
computed at compile-time. There seems to be a lot of mutation 
happening here yet I have heard no mutation should take place 
in meta-programming as it subscribes to functional programming 
paradigm.




void computeAtCompileTime(ref string pattern, ref int[char] tabel1) {

    int size = to!int(pattern.length);

    foreach (c; ALPHABET) {
        tabel1[c] = size;
    }

    for (int i = 0; i < size - 1; ++i) {   // Initialise array
        tabel1[pattern[i]] = size - i - 1;

        pragma(msg, format("reached pattern  table1[pattern[i]]=(%s) here",
            table1[pattern[i]].stringof ~ " v=" ~ (size - i - 1).stringof));
    }
}


Actually, the main things you can't do in CTFE are FPU math 
operations (much of std.math has issues unfortunately), compiler 
intrinsics, pointer/union operations, and I/O. I don't 
immediately see anything that will cause issues with CTFE in that 
function. However, sometimes the compiler isn't smart enough to 
figure out that it should be doing that, but you can force the 
compiler to try CTFE using this pattern


int ctfeFunc() {
    return 42;   // placeholder body; any CTFE-able computation works
}

void main() {
    enum val = ctfeFunc();
}

enums are manifest constants, and thus must be computable at 
compile time, so this will issue an error if something in your 
function can't be evaluated in CTFE.


Re: Sending an immutable object to a thread

2015-07-18 Thread Frank Pagliughi via Digitalmars-d-learn
OK, I found a couple of solutions, though if anyone can tell me 
something better, I would love to hear it.


By making an alias to a rebindable reference, the receive() was 
able to create the tuple. So I renamed the class MessageType:


class MessageType { ... };

and then made a Message an immutable one of these:

alias immutable(MessageType) Message;

and finally made a VarMessage as a rebindable Message (thus, a 
mutable reference to an immutable object):


alias Rebindable!(Message) VarMessage;

[I will likely rethink these names, but anyway... ]

Now I can send a reference to an immutable object across threads. 
The receiver wants the VarMessage:


receive(
(Tid cli, VarMessage msg) {
int retVal = do_something_with(msg);
send(cli, retVal);
}
);


and a few different things work to send the object:

auto msg = new Message(...);
send(tid, thisTid(), VarMessage(msg));

or:
send(tid, thisTid(), rebindable(msg));

or:
VarMessage vmsg = new Message(...);
send(tid, thisTid(), vmsg);


A second way that seems plausible is to just make the message a 
value type using a struct and then send a copy to the thread. 
This seems viable since the vast bulk of the message is a string 
payload, and thus the size of the struct is pretty small.
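That struct-copy approach can be sketched as follows (a minimal, hypothetical example; the struct is sent by value, and its string payload is immutable data, so no Rebindable is needed):

```d
import std.concurrency;

// Hypothetical value-type message: sent by copy, so the receiving
// thread gets its own struct and no immutable-reference juggling.
struct Msg {
    string payload;   // string contents are immutable, safe to share
}

void worker()
{
    receive((Tid cli, Msg m) {
        // reply with some result derived from the message
        send(cli, cast(int) m.payload.length);
    });
}

void main()
{
    auto tid = spawn(&worker);
    send(tid, thisTid(), Msg("hello"));
    auto ret = receiveOnly!int();   // 5
}
```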


Re: VisualD building with GDC setting library path

2015-07-18 Thread Rainer Schuetze via Digitalmars-d-learn



On 18.07.2015 15:07, kerdemdemir wrote:

Hi,

I am trying to build Cristi Cobzarenco's fork of Scid which has
LAPACK/BLAS dependencies.

I add all modules of Scid to my project and I am trying to build it
within my project.

I add Library Files: liblapack.a libblas.a libtmglib.a libgfortran.a
etc. via the menu
Configuration Properties -> Linker -> General -> Library Files. Even
though I set the C:\Qt\Tools\mingw491_32\i686-w64-mingw32\lib path which
has libgfortran.a from Visual D Settings -> Library Paths,

I am getting the error:

gdc: error: libgfortran.a: No such file or directory.

Libraries only work if I copy and paste them into my project folder. And
copying all libraries, fortran.a, pthread.a etc., seems not logical to me.

I read there is a known issue with sc.ini but it should only be with
DMD, not with GDC. How can I set the library path with Visual D while
building with GDC?


GDC does not search libraries passed by filename on the command line. 
You either have to specify the full path or use option -l, i.e.


- add -llapack -lblas -ltmglib -lgfortran etc to Library Files
- set the search path either in the global or the project option 
Library Search Path


Re: String Metaprogramming

2015-07-18 Thread Clayton via Digitalmars-d-learn

On Saturday, 18 July 2015 at 16:01:25 UTC, Nicholas Wilson wrote:

On Saturday, 18 July 2015 at 13:48:20 UTC, Clayton wrote:

[...]




[...]


change function signature to
int[char] function(string), or, as the char type is the index, 
probably better off as int[256] function(string). Also there is 
probably no need to take pattern by ref, as it is effectively 
struct{ size_t length; char* ptr; }; also we aren't going to 
modify it.


int[256] computeAtCompileTime(string pattern)
{

[...]
pattern.length is a size_t; no need to change its type in 
another variable. You are unlikely to be dealing with a string 
longer than 2^32 (also signedness), but w/e
int[256] ret; // implicitly initialised to int.init 
(i.e. 0)



[...]

can just foreach over pattern
foreach(i, c; pattern)
ret[c] = pattern.length - i -1;


[...]



[...]


if you want this to be not callable at runtime then wrap the 
main body (sans variable declaration) with

if (__ctfe)
{
 ...

}
Thanks Nicholas, I have integrated some of your advice into the 
edited code, i.e. foreach and dropping ref on pattern. Hope I 
fully understood what you meant. I am yet to look at whether I 
still need to change the signature. I have heard there are two 
approaches to this; where does one really draw the line between 
CTFE and template metaprogramming?


Re: String Metaprogramming

2015-07-18 Thread Tamas via Digitalmars-d-learn
Thanks Nicholas, I have integrated some of your advice into the 
edited code, i.e. foreach and dropping ref on pattern. Hope I 
fully understood what you meant. I am yet to look at whether I 
still need to change the signature. I have heard there are two 
approaches to this; where does one really draw the line between 
CTFE and template metaprogramming?


Template metaprogramming (abusing the template facility in C++ 
to run a program at compile time) used to be the only way to 
execute something at compile time in C++. In D you don't need to 
do that, as writing such code is a matter of putting an enum or 
static in the right place. Still, you can use templates to 
achieve your goal, if that helps.
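A minimal sketch of the two styles side by side (hypothetical factorial example):

```d
// CTFE: an ordinary function, forced to run at compile time by `enum`.
int factorial(int n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}
enum f5 = factorial(5);            // computed by the compiler

// Template metaprogramming: the same value via template recursion,
// C++-style. Legal in D, but rarely the clearer choice.
template Factorial(int n) {
    static if (n <= 1)
        enum Factorial = 1;
    else
        enum Factorial = n * Factorial!(n - 1);
}
static assert(f5 == Factorial!5);  // both are 120
```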




Re: String Metaprogramming

2015-07-18 Thread Nicholas Wilson via Digitalmars-d-learn

On Saturday, 18 July 2015 at 13:48:20 UTC, Clayton wrote:
Am new to D programming, am considering it since it supports 
compile-time function execution . My challenge is how can I 
re-implement the function below so that it is fully executed in 
compile-time. The function should result to tabel1 being 
computed at compile-time. There seems to be a lot of mutation 
happening here yet I have heard no mutation should take place 
in meta-programming as it subscribes to functional programming 
paradigm.



void computeAtCompileTime( ref string pattern ,ref int[char] 
tabel1){


change the function signature to 
int[char] function(string), or, as the char type is the index, 
probably better off as 
int[256] function(string). Also there is probably no need to take 
pattern by ref, as it is effectively struct{ size_t length; char* 
ptr; }; also we aren't going to modify it.


int[256] computeAtCompileTime(string pattern)
{

int size = to!int(pattern.length) ;
pattern.length is a size_t; no need to change its type in another 
variable. You are unlikely to be dealing with a string longer than 
2^32 (also signedness), but w/e
int[256] ret; // implicitly initialised to int.init (i.e. 
0)




foreach( c; ALPHABET){
tabel1[c] = size;
}

for (int i = 0; i < size - 1; ++i) {   // Initialise array
    tabel1[pattern[i]] = size - i - 1;

can just foreach over pattern
foreach(i, c; pattern)
ret[c] = pattern.length - i -1;

pragma(msg, format("reached pattern  table1[pattern[i]]=(%s) here",
    table1[pattern[i]].stringof ~ " v=" ~ (size - i - 1).stringof));

}



}


if you want this to be not callable at runtime then wrap the main 
body (sans variable declaration) with

if (__ctfe)
{
 ...

}




Re: Does shared prevent compiler reordering?

2015-07-18 Thread Kagamin via Digitalmars-d-learn
No, it doesn't affect code generation; it's mostly for the type 
checker, to help you write concurrent code, not to do it instead 
of you.
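When actual ordering guarantees are needed, core.atomic is the usual tool; a minimal sketch (assuming the default sequentially-consistent memory order is what you want):

```d
import core.atomic;

shared int flag;

// atomicStore/atomicLoad insert the required barriers, so neither
// the compiler nor the CPU may reorder accesses across them.
void publish() {
    atomicStore(flag, 1);
}

bool ready() {
    return atomicLoad(flag) == 1;
}
```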


Re: String Metaprogramming

2015-07-18 Thread anonymous via Digitalmars-d-learn

On Saturday, 18 July 2015 at 16:18:30 UTC, Clayton wrote:
Thanks, you were right. It seems there are some keywords, 
though, which one has to use so that the code gets executed at 
compile time. For example I had to change the second for loop to 
a foreach loop,


`for` loops work just fine in CTFE. `foreach` is usually nicer, 
though (regardless of CTFE or not).


and then put an enum to ensure that TableFromCompiler gets 
evaluated at compile time. Having written the code this way, 
though, gives rise to another question: D supports two 
approaches to compile-time metaprogramming, i.e. CTFE and 
templates, and I am not very sure which paradigm my code 
falls into.


Your computeAtCompileTime is a template that results in a 
function when instantiated. You're calling such a generated 
function and assigning the result to an enum, which makes it a 
CTFE call.


So there's both CTFE and a template in your code.

You could probably do the whole pre-computation without CTFE, 
using only templates. But (here) CTFE is more straightforward, as 
you can just write normal run-time D.


Re: String Metaprogramming

2015-07-18 Thread Clayton via Digitalmars-d-learn

On Saturday, 18 July 2015 at 13:56:36 UTC, Adam D. Ruppe wrote:

On Saturday, 18 July 2015 at 13:48:20 UTC, Clayton wrote:
There seems to be a lot of mutation happening here yet I have 
heard no mutation should take place in meta-programming as it 
subscribes to functional programming paradigm.


That's not true in D, you can just write a regular function and 
evaluate it in a compile time context, like initializing a 
static variable.


You usually don't need to write special code for compile time 
stuff in D.



Thanks, you were right. It seems there are some keywords, 
though, which one has to use so that the code gets executed at 
compile time. For example I had to change the second for loop to 
a foreach loop, and then put an enum to ensure that 
TableFromCompiler gets evaluated at compile time. Having written 
the code this way, though, gives rise to another question: D 
supports two approaches to compile-time metaprogramming, i.e. 
CTFE and templates, and I am not very sure which paradigm my 
code falls into.



import std.stdio;
import std.string;
import std.conv;


I[C] computeAtCompileTime(S, C, I)(const S pattern) {
    I[C] table1;

    const int size = to!int(pattern.length);   // Length of the
                                               // pattern to be matched

    foreach (c; ALPHABET) {   // Initialise array
        table1[c] = size;
    }

    foreach (i; 0 .. size - 1) {
        table1[pattern[i]] = size - i - 1;
    }
    return table1;
}

void main() {

    enum TableFromCompiler = computeAtCompileTime!(const string, char, int)(pattern);

    writeln(TableFromCompiler);
}


Re: Working functionally with third party libraries

2015-07-18 Thread Kagamin via Digitalmars-d-learn
On Saturday, 18 July 2015 at 09:33:37 UTC, Jarl André Hübenthal 
wrote:
I don't understand where you are going with this. I have solved 
my problem. Laziness is good for, let's say, taking 5 out of 
infinite results.


It's also good for saving resources: you don't spend time 
managing those resources, and you save that time to complete the 
processing earlier.