Re: How to Declare a new pragma ?

2014-12-22 Thread FrankLike via Digitalmars-d-learn

On Monday, 22 December 2014 at 00:55:08 UTC, Mike Parker wrote:

On 12/22/2014 9:21 AM, FrankLike wrote:



Now the x64 mainform always has a console window, and the
entry point is main.
Could you do it?
Thank you.


Since 64-bit DMD uses the Microsoft toolchain, you need to pass 
a parameter on the command line to the MS linker. Linker 
parameters are passed through with the -L switch.


See [1] for information about the /SUBSYSTEM option, which is 
what you want in this case. Probably something like this:


-L/SUBSYSTEM:WINDOWS,5.02

[1] http://msdn.microsoft.com/en-us/library/fcc1zstk.aspx

Thank you.
-L/ENTRY:mainCRTStartup
It's OK.
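
For anyone else landing here: combining the two suggestions above, a
command along these lines should build a windowed 64-bit program whose
entry point is an ordinary D main (app.d is only a placeholder name):

  dmd -m64 app.d -L/SUBSYSTEM:WINDOWS -L/ENTRY:mainCRTStartup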


math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

Hi everybody,

I am a Java developer and have used C/C++ only for some home
projects, so I never mastered native programming.


I am currently learning D and I find it fascinating. I was 
reading the documentation about std.parallelism and I wanted to 
experiment a bit with the example "Find the logarithm of every 
number from 1 to 10_000_000 in parallel".


So, first, I changed the limit to 1 billion and ran it. I was 
blown away by the performance: the program ran in 4 secs, 670 ms 
with a workUnitSize of 200. I have a 4th-generation i7 
processor with 8 cores.
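
For reference, a minimal sketch of the kind of program being measured -
essentially the std.parallelism documentation example with the limit
raised to one billion (the array alone is ~8 GB, matching the memory
figure mentioned later in the thread; names here are illustrative):

import std.math : log;
import std.parallelism : taskPool;

void main()
{
    // one double per computed logarithm: ~8 GB for 1_000_000_000 entries
    auto logs = new double[1_000_000_000];

    // workUnitSize = 200: each work unit computes 200 logarithms
    foreach (i, ref elem; taskPool.parallel(logs, 200))
        elem = log(i + 1.0);
}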


Then I was curious to try the same test in Java, just to see how 
much slower it would be (at least that was what I expected). I 
used Java's ExecutorService with a pool of 8 threads and created 
5_000_000 tasks, each task calculating log() for 200 numbers. 
The whole program ran in 3 secs, 315 ms.


Now, can anyone explain why this program ran faster in Java? I 
ran both programs multiple times and the results were always 
close to these execution times.


Can the implementation of log() function be the reason for a 
slower execution time in D?


I then decided to run the same program in a single thread, a 
simple foreach/for loop. I tried it in C and Go also. These are 
the results:

- D: 24 secs, 32 ms.
- Java: 20 secs, 881 ms.
- C: 21 secs
- Go: 37 secs

I run Arch Linux on my PC. I compiled D programs using dmd-2.066 
and used no compile arguments (dmd prog.d).
I used Oracle's Java 8 (tried 7 and 6; it seems that with Java 6 the 
performance is a bit better than with 7 and 8).

To compile the C program I used: gcc 4.9.2
For Go program I used go 1.4

I really, really like the built-in support in D for parallel 
processing and how easy it is to schedule tasks taking advantage of 
workUnitSize.


Thanks,
Iov


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread bachmeier via Digitalmars-d-learn

On Monday, 22 December 2014 at 10:12:52 UTC, Iov Gherman wrote:
Now, can anyone explain why this program ran faster in Java? I 
ran both programs multiple times and the results were always 
close to this execution times.


Can the implementation of log() function be the reason for a 
slower execution time in D?


I then decided to ran the same program in a single thread, a 
simple foreach/for loop. I tried it in C and Go also. This are 
the results:

- D: 24 secs, 32 ms.
- Java: 20 secs, 881 ms.
- C: 21 secs
- Go: 37 secs

I run Arch Linux on my PC. I compiled D programs using 
dmd-2.066 and used no compile arguments (dmd prog.d).
I used Oracle's Java 8 (tried 7 and 6, seems like with Java 6 
the performance is a bit better then 7 and 8).

To compile the C program I used: gcc 4.9.2
For Go program I used go 1.4

I really really like the built in support in D for parallel 
processing and how easy is to schedule tasks taking advantage 
of workUnitSize.


Thanks,
Iov


DMD is generally going to produce the slowest code. LDC and GDC 
will normally do better.


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Daniel Kozak via Digitalmars-d-learn

 I run Arch Linux on my PC. I compiled D programs using dmd-2.066 
 and used no compile arguments (dmd prog.d)

You should try some arguments: -O -release -inline -noboundscheck.
Also, trying gdc or ldc should help with performance.

Can you post your code in all languages somewhere? I'd like to try it on
my machine :)



Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Daniel Kozak via Digitalmars-d-learn
On Monday, 22 December 2014 at 10:35:52 UTC, Daniel Kozak via 
Digitalmars-d-learn wrote:


I run Arch Linux on my PC. I compiled D programs using 
dmd-2.066 and used no compile arguments (dmd prog.d)


You should try use some arguments -O -release -inline 
-noboundscheck

and maybe try use gdc or ldc should help with performance

can you post your code in all languages somewhere? I like to 
try it on

my machine :)


Btw, try the C log function; maybe it would be faster:

import core.stdc.math;


Re: ini library in OSX

2014-12-22 Thread Robert burner Schadek via Digitalmars-d-learn

On Saturday, 20 December 2014 at 08:09:06 UTC, Joel wrote:
On Monday, 13 October 2014 at 16:06:42 UTC, Robert burner 
Schadek wrote:

On Saturday, 11 October 2014 at 22:38:20 UTC, Joel wrote:
On Thursday, 11 September 2014 at 10:49:48 UTC, Robert burner 
Schadek wrote:

some self promo:

http://code.dlang.org/packages/inifiled


I would like an example?


go to the link and scroll down a page


How do you use it with current ini files ([label] key=name)?


I think I don't follow?

readINIFile(CONFIG_STRUCT, "filename.ini"); ?


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Russel Winder via Digitalmars-d-learn

On Mon, 2014-12-22 at 10:12 +, Iov Gherman via Digitalmars-d-learn wrote:
 […]
 - D: 24 secs, 32 ms.
 - Java: 20 secs, 881 ms.
 - C: 21 secs
 - Go: 37 secs
 
Without the source code and the commands used to build and run, it 
is impossible to offer constructive criticism of the results. However, a
priori the above does not surprise me. I'll wager ldc2 or gdc will 
beat dmd for CPU-bound code, so as others have said, for benchmarking 
use ldc2 or gdc with all optimization on (-O3). If you used gc for Go, 
then switch to gccgo (again with -O3) and see a huge performance 
improvement on CPU-bound code.

Java beating C and C++ is fairly normal these days due to the tricks 
you can play with JIT over AOT optimization. Once Java has proper 
support for GPGPU, it will be hard for native code languages to get 
any new converts from JVM.

Put the source up and I and others will try things out.
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread aldanor via Digitalmars-d-learn

On Monday, 22 December 2014 at 11:11:07 UTC, aldanor wrote:


Just tried it out myself (E5 Xeon / Linux):

D version: 19.64 sec (avg 3 runs)

import core.stdc.math;

void main() {
    double s = 0;
    foreach (i; 1 .. 1_000_000_000)
        s += log(i);
}

// build flags: -O -release

C version: 19.80 sec (avg 3 runs)

#include <math.h>

int main() {
    double s = 0;
    long i;
    for (i = 1; i < 1000000000; i++)
        s += log(i);
    return 0;
}

// build flags: -O3 -lm


Replacing import core.stdc.math with import std.math in the D 
example increases the avg runtime from 19.64 to 23.87 seconds 
(~20% slower) which is consistent with OP's statement.


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread aldanor via Digitalmars-d-learn

On Monday, 22 December 2014 at 10:40:45 UTC, Daniel Kozak wrote:
On Monday, 22 December 2014 at 10:35:52 UTC, Daniel Kozak via 
Digitalmars-d-learn wrote:


I run Arch Linux on my PC. I compiled D programs using 
dmd-2.066 and used no compile arguments (dmd prog.d)


You should try use some arguments -O -release -inline 
-noboundscheck

and maybe try use gdc or ldc should help with performance

can you post your code in all languages somewhere? I like to 
try it on

my machine :)


Btw. try use C log function, maybe it would be faster:

import core.stdc.math;


Just tried it out myself (E5 Xeon / Linux):

D version: 19.64 sec (avg 3 runs)

import core.stdc.math;

void main() {
    double s = 0;
    foreach (i; 1 .. 1_000_000_000)
        s += log(i);
}

// build flags: -O -release

C version: 19.80 sec (avg 3 runs)

#include <math.h>

int main() {
    double s = 0;
    long i;
    for (i = 1; i < 1000000000; i++)
        s += log(i);
    return 0;
}

// build flags: -O3 -lm


Re: DUB build questions

2014-12-22 Thread uri via Digitalmars-d-learn
On Saturday, 20 December 2014 at 08:36:15 UTC, Russel Winder via 
Digitalmars-d-learn wrote:


On Sat, 2014-12-20 at 05:46 +, Dicebot via 
Digitalmars-d-learn wrote:
On Saturday, 20 December 2014 at 04:15:00 UTC, Rikki 
Cattermole wrote:
  b) Can I do parallel builds with dub. CMake gives me 
  Makefiles so I can

  make -j does dub have a similar option?
 
 No


Worth noting that it is not actually a dub problem as much, it 
is simply not worth adding parallel builds because separate
compilation is much much slower with existing D front-end 
implementation and even doing it in parallel is sub-optimal

compared to dump-it-all-at-once.



From previous rounds of this sort of question (for the SCons D
tooling), the consensus of the community appeared to be that 
the only
time separate module compilation was really useful was for 
mixed D, C,
C++, Fortran systems. For pure D systems, single call of the 
compiler
is deemed far better than traditional C, C++, Fortran 
compilation
strategy. This means the whole make -j thing is not an issue, 
it
just means that Dub is only really dealing with the all D 
situation.


The corollary to this is that DMD, LDC and GDC really need to 
make use
of all parallelism they can, which I suspect is more or less 
none.


Chapel has also gone the compile all modules with a single 
compiler
call strategy as this enables global optimization from source 
to

executable.



Thanks for the info everyone.


I've used dub for just on two days now and I'm hooked!

At first I was very unsure about giving up my Makefiles, being 
the build system control freak that I am, but it really shines at 
rapid development.


As for out of source builds, it is a non-issue really. I like 
running the build outside the project tree but I can use 
gitignore and targetPath. For larger projects where we need to 
manage dependencies, generate code, run SWIG, etc., I'd still use 
SCons or CMake.



Regarding parallel builds, make -j on CMake Makefiles and dub 
build feel about the same, and that's all I care about.


I'm still not sure how dub would scale for large projects with 
100s-1000s of source modules. DMD ran out of memory in the VM 
(1Gb) at around 70 modules but CMake works due to separate 
compilation of each module ... I think. However, I didn't 
investigate due to lack of time so I wouldn't score this against 
dub. I am sure it can do it if I take the time to figure it out 
properly.


Cheers,
uri


optimization / benchmark tips a good topic for wiki ?

2014-12-22 Thread Laeeth Isharc via Digitalmars-d-learn
Replacing import core.stdc.math with import std.math in the 
D example increases the avg runtime from 19.64 to 23.87 seconds 
(~20% slower) which is consistent with OP's statement.


+ GDC/LDC vs DMD
+ nobounds, release

Do you think we should start a topic on the D wiki front page for 
benchmarking/performance tips, to organize people's experience of 
what works?


I took a quick look and couldn't see anything already.  And it 
seems to be a topic that comes up quite frequently (less on forum 
than people doing their own benchmarks and it getting picked up 
on reddit etc).


I am not so experienced in this area otherwise I would write a 
first draft myself.


Laeeth


Re: DUB build questions

2014-12-22 Thread Rikki Cattermole via Digitalmars-d-learn

On 23/12/2014 1:39 a.m., uri wrote:

On Saturday, 20 December 2014 at 08:36:15 UTC, Russel Winder via
Digitalmars-d-learn wrote:


On Sat, 2014-12-20 at 05:46 +, Dicebot via Digitalmars-d-learn wrote:

On Saturday, 20 December 2014 at 04:15:00 UTC, Rikki Cattermole wrote:
  b) Can I do parallel builds with dub. CMake gives me  
Makefiles so I can
  make -j does dub have a similar option?
  No

Worth noting that it is not actually a dub problem as much, it is
simply not worth adding parallel builds because separate
compilation is much much slower with existing D front-end
implementation and even doing it in parallel is sub-optimal
compared to dump-it-all-at-once.



From previous rounds of this sort of question (for the SCons D

tooling), the consensus of the community appeared to be that the only
time separate module compilation was really useful was for mixed D, C,
C++, Fortran systems. For pure D systems, single call of the compiler
is deemed far better than traditional C, C++, Fortran compilation
strategy. This means the whole make -j thing is not an issue, it
just means that Dub is only really dealing with the all D situation.

The corollary to this is that DMD, LDC and GDC really need to make use
of all parallelism they can, which I suspect is more or less none.

Chapel has also gone the compile all modules with a single compiler
call strategy as this enables global optimization from source to
executable.



Thanks for the info everyone.


I've used dub for just on two days now and I'm hooked!

At first I was very unsure about giving up my Makefiles, being the build
system control freak that I am, but it really shines at rapid development.

As for out of source builds, it is a non-issue really. I like running
the build outside the project tree but I can use gitignore and
targetPath. For larger projects where we need to manage dependencies,
generate code, run SWIG etc. I'd still use both SCons or CMake.


Regarding parallel builds, make -j on CMake Makefiles and dub build
feel about the same, and that's all I care about.

I'm still not sure how dub would scale for large projects with
100s-1000s of source modules. DMD ran out of memory in the VM (1Gb) at
around 70 modules but CMake works due to separate compilation of each
module ... I think. However, I didn't investigate due to lack of time so
I wouldn't score this against dub. I am sure it can do it if I take the
time to figure it out properly.

Cheers,
uri


To build anything with dmd seriously you need about 2 GB of RAM 
available. Yes, it's a lot, but it's fast.

Also use subpackages. They are your friend.


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

Hi Guys,

First of all, thank you all for responding so quickly; it is so 
nice to see D having such an active community.


As I said in my first post, I used no other parameters to dmd 
when compiling because I don't know too much about dmd 
compilation flags. I can't wait to try the flags Daniel suggested 
with dmd (-O -release -inline -noboundscheck) and the other two 
compilers (ldc2 and gdc). Thank you guys for your suggestions.


Meanwhile, I created a git repository on github and put all my 
code there. If you find any errors please let me know. Because I 
am keeping the results in a big array, the programs take 
approximately 8 GB of RAM. If you don't have enough RAM, feel free 
to decrease the size of the array. For the Java code you will also 
need to change 'compile-run.bsh' and use the right memory 
parameters.



Thank you all for helping,
Iov


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread bachmeier via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:05:19 UTC, Iov Gherman wrote:

Hi Guys,

First of all, thank you all for responding so quick, it is so 
nice to see D having such an active community.


As I said in my first post, I used no other parameters to dmd 
when compiling because I don't know too much about dmd 
compilation flags. I can't wait to try the flags Daniel 
suggested with dmd (-O -release -inline -noboundscheck) and the 
other two compilers (ldc2 and gdc). Thank you guys for your 
suggestions.


Meanwhile, I created a git repository on github and I put there 
all my code. If you find any errors please let me know. Because 
I am keeping the results in a big array the programs take 
approximately 8Gb of RAM. If you don't have enough RAM feel 
free to decrease the size of the array. For java code you will 
also need to change 'compile-run.bsh' and use the right memory 
parameters.



Thank you all for helping,
Iov


Link to your repo?


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:16:05 UTC, bachmeier wrote:

On Monday, 22 December 2014 at 17:05:19 UTC, Iov Gherman wrote:

Hi Guys,

First of all, thank you all for responding so quick, it is so 
nice to see D having such an active community.


As I said in my first post, I used no other parameters to dmd 
when compiling because I don't know too much about dmd 
compilation flags. I can't wait to try the flags Daniel 
suggested with dmd (-O -release -inline -noboundscheck) and 
the other two compilers (ldc2 and gdc). Thank you guys for 
your suggestions.


Meanwhile, I created a git repository on github and I put 
there all my code. If you find any errors please let me know. 
Because I am keeping the results in a big array the programs 
take approximately 8Gb of RAM. If you don't have enough RAM 
feel free to decrease the size of the array. For java code you 
will also need to change 'compile-run.bsh' and use the right 
memory parameters.



Thank you all for helping,
Iov


Link to your repo?


Sorry, forgot about it:
https://github.com/ghermaniov/benchmarks



Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

So, I did some more testing with the one processing in parallel:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread John Colvin via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:

So, I did some more testing with the one processing in paralel:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


Flag suggestions:

ldc2 -O3 -release -mcpu=native -singleobj

gdc -O3 -frelease -march=native


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread aldanor via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:

So, I did some more testing with the one processing in paralel:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


import std.math, std.stdio, std.datetime;

-- try replacing std.math with core.stdc.math.


Re: Inheritance and in-contracts

2014-12-22 Thread aldanor via Digitalmars-d-learn

https://github.com/D-Programming-Language/dmd/pull/4200


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

On Monday, 22 December 2014 at 18:00:18 UTC, aldanor wrote:

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:

So, I did some more testing with the one processing in paralel:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


import std.math, std.stdio, std.datetime;

-- try replacing std.math with core.stdc.math.


Tried it, it is worse:
6 secs, 78 ms, while the initial one was 4 secs, 977 ms and 
sometimes even better.




Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Iov Gherman via Digitalmars-d-learn

On Monday, 22 December 2014 at 17:50:20 UTC, John Colvin wrote:

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:

So, I did some more testing with the one processing in paralel:

--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


Flag suggestions:

ldc2 -O3 -release -mcpu=native -singleobj

gdc -O3 -frelease -march=native


Tried it, here are the results:

--- ldc:
6 secs, 271 ms

--- ldc -O3 -release -mcpu=native -singleobj:
5 secs, 686 ms

--- gdc:
10 secs, 439 ms

--- gdc -O3 -frelease -march=native:
9 secs, 180 ms



Re: Inheritance and in-contracts

2014-12-22 Thread aldanor via Digitalmars-d-learn

On Monday, 22 December 2014 at 19:11:13 UTC, Ali Çehreli wrote:

On 12/22/2014 10:06 AM, aldanor wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Thank you! This fixes a big problem with the contracts in D.

Ali


It's not my PR but I just thought this thread would be happy to 
know :)


Re: Inheritance and in-contracts

2014-12-22 Thread Ali Çehreli via Digitalmars-d-learn

On 12/22/2014 10:06 AM, aldanor wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Thank you! This fixes a big problem with the contracts in D.

Ali



Re: Inheritance and in-contracts

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 22/12/14 19:06, aldanor via Digitalmars-d-learn wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Yes, I saw that PR with some joy -- thanks for the link! :-)


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread via Digitalmars-d-learn

On Monday, 22 December 2014 at 18:23:29 UTC, Iov Gherman wrote:

On Monday, 22 December 2014 at 18:00:18 UTC, aldanor wrote:

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:
So, I did some more testing with the one processing in 
paralel:


--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


import std.math, std.stdio, std.datetime;

-- try replacing std.math with core.stdc.math.


Tried it, it is worst:
6 secs, 78 ms while the initial one was 4 secs, 977 ms and 
sometimes even better.


Strange... for me, core.stdc.math.log is about twice as fast as 
std.math.log.
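
For anyone who wants to compare the two on their own machine, a minimal
sketch along these lines should do; it assumes the std.datetime.benchmark
helper as it existed in 2.066-era Phobos, and uses a smaller N than the
thread's one billion so it finishes quickly:

import std.datetime : benchmark;
import std.stdio : writefln;
static import core.stdc.math;
static import std.math;

enum N = 100_000_000;  // deliberately smaller than the 1 billion used above

double sumStd()
{
    double s = 0;
    foreach (i; 1 .. N) s += std.math.log(i);
    return s;
}

double sumCore()
{
    double s = 0;
    foreach (i; 1 .. N) s += core.stdc.math.log(i);
    return s;
}

void main()
{
    // benchmark!(...)(1) runs each callable once and returns the timings
    auto times = benchmark!(sumStd, sumCore)(1);
    writefln("std.math.log: %s ms   core.stdc.math.log: %s ms",
             times[0].msecs, times[1].msecs);
}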


Re: Inheritance and in-contracts

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d-learn

On 22/12/14 20:12, aldanor via Digitalmars-d-learn wrote:

On Monday, 22 December 2014 at 19:11:13 UTC, Ali Çehreli wrote:

On 12/22/2014 10:06 AM, aldanor wrote:

https://github.com/D-Programming-Language/dmd/pull/4200


Thank you! This fixes a big problem with the contracts in D.

Ali


It's not my PR but I just thought this thread would be happy to know :)


Actually, the author is a friend of mine, and an all-round wonderful guy. :-)




Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread John Colvin via Digitalmars-d-learn

On Monday, 22 December 2014 at 18:27:48 UTC, Iov Gherman wrote:

On Monday, 22 December 2014 at 17:50:20 UTC, John Colvin wrote:

On Monday, 22 December 2014 at 17:28:12 UTC, Iov Gherman wrote:
So, I did some more testing with the one processing in 
paralel:


--- dmd:
4 secs, 977 ms

--- dmd with flags: -O -release -inline -noboundscheck:
4 secs, 635 ms

--- ldc:
6 secs, 271 ms

--- gdc:
10 secs, 439 ms

I also pushed the new bash scripts to the git repository.


Flag suggestions:

ldc2 -O3 -release -mcpu=native -singleobj

gdc -O3 -frelease -march=native


Tried it, here are the results:

--- ldc:
6 secs, 271 ms

--- ldc -O3 -release -mcpu=native -singleobj:
5 secs, 686 ms

--- gdc:
10 secs, 439 ms

--- gdc -O3 -frelease -march=native:
9 secs, 180 ms


That's very different to my results.

I see no important difference between ldc and dmd when using 
std.math, but when using core.stdc.math ldc halves its time where 
dmd only manages to get to ~80%


How to get the processid by exe's name in D?

2014-12-22 Thread FrankLike via Digitalmars-d-learn
Now, if you want to know whether an exe is among the running 
processes, you must use the Win API. Do you have any other idea?


Re: How to get the processid by exe's name in D?

2014-12-22 Thread Adam D. Ruppe via Digitalmars-d-learn
The Windows API is how I'd do it - look up how to do it in C, 
then do the same thing in D.
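
For what it's worth, here is a hedged sketch of that approach using the
Win32 ToolHelp snapshot API. Newer druntimes ship these declarations in
core.sys.windows.tlhelp32, but they are written out by hand here so the
example stays self-contained; treat it as a starting point rather than
tested code:

import core.stdc.string : strlen;
import core.sys.windows.windows;
import std.string : icmp;

enum TH32CS_SNAPPROCESS = 0x00000002;

struct PROCESSENTRY32
{
    DWORD dwSize;
    DWORD cntUsage;
    DWORD th32ProcessID;
    size_t th32DefaultHeapID;   // ULONG_PTR in the Windows headers
    DWORD th32ModuleID;
    DWORD cntThreads;
    DWORD th32ParentProcessID;
    LONG  pcPriClassBase;
    DWORD dwFlags;
    CHAR[MAX_PATH] szExeFile;
}

extern (Windows)
{
    HANDLE CreateToolhelp32Snapshot(DWORD dwFlags, DWORD th32ProcessID);
    BOOL Process32First(HANDLE hSnapshot, PROCESSENTRY32* lppe);
    BOOL Process32Next(HANDLE hSnapshot, PROCESSENTRY32* lppe);
}

// Returns the ID of the first process whose executable name matches,
// or 0 if no such process is found.
DWORD pidByExeName(string exeName)
{
    auto snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return 0;
    scope (exit) CloseHandle(snap);

    PROCESSENTRY32 entry;
    entry.dwSize = PROCESSENTRY32.sizeof;
    if (!Process32First(snap, &entry))
        return 0;
    do
    {
        auto name = entry.szExeFile[0 .. strlen(entry.szExeFile.ptr)];
        if (icmp(name, exeName) == 0)        // case-insensitive match
            return entry.th32ProcessID;
    } while (Process32Next(snap, &entry));
    return 0;
}

Usage would be something like pidByExeName("notepad.exe").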


Re: math.log() benchmark of first 1 billion int using std.parallelism

2014-12-22 Thread Daniel Kozak via Digitalmars-d-learn


That's very different to my results.

I see no important difference between ldc and dmd when using 
std.math, but when using core.stdc.math ldc halves its time 
where dmd only manages to get to ~80%


What CPU do you have? On my Intel Core i3 I have a similar 
experience to Iov Gherman's, but on my AMD FX-4200 I get the same 
results as you. It seems std.math.log is not good for my AMD CPU :)




Re: ini library in OSX

2014-12-22 Thread Joel via Digitalmars-d-learn
On Monday, 22 December 2014 at 11:04:10 UTC, Robert burner 
Schadek wrote:

On Saturday, 20 December 2014 at 08:09:06 UTC, Joel wrote:
On Monday, 13 October 2014 at 16:06:42 UTC, Robert burner 
Schadek wrote:

On Saturday, 11 October 2014 at 22:38:20 UTC, Joel wrote:
On Thursday, 11 September 2014 at 10:49:48 UTC, Robert 
burner Schadek wrote:

some self promo:

http://code.dlang.org/packages/inifiled


I would like an example?


go to the link and scroll down a page


How do you use it with current ini files ([label] key=name)?


I think I don't follow?

readINIFile(CONFIG_STRUCT, "filename.ini"); ?


I have an ini file that has price, goods, date, etc. for each item. I 
couldn't see how you could have that (see below). It's got more than 
one section, and I don't know how to do that with your library.


In this form:

[section0]
day=1
month=8
year=2013
item=Fish'n'Chips
cost=3
shop=Take aways
comment=Don't know the date
[section1]
day=1
month=8
year=2013
item=almond individually wrapped chocolates (for Cecily), 
through-ties

cost=7
shop=Putaruru - Dairy (near Hotel 79)
comment=Don't know the date or time.

Also, what is CONFIG_STRUCT? - is that for using 'struct' instead 
of 'class'?
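
Not an inifiled-specific answer, but for reference: a file in exactly
that shape can also be read by hand into nested associative arrays. A
minimal sketch (readIni is just an illustrative name, and the section
and key names come from the example above):

import std.stdio : File;
import std.string : indexOf, strip;

// Parses "[section]" headers and "key=value" lines into
// string[string][string]: result[section][key] == value.
string[string][string] readIni(string path)
{
    string[string][string] result;
    string current;
    foreach (line; File(path).byLine)
    {
        auto s = line.strip.idup;
        if (s.length == 0 || s[0] == ';' || s[0] == '#')
            continue;                     // skip blanks and comments
        if (s[0] == '[' && s[$ - 1] == ']')
        {
            current = s[1 .. $ - 1];      // e.g. "section0", "section1"
            continue;
        }
        auto eq = s.indexOf('=');
        if (eq > 0)
            result[current][s[0 .. eq].strip] = s[eq + 1 .. $].strip;
    }
    return result;
}

// Usage: auto ini = readIni("prices.ini"); then ini["section0"]["item"]
// is "Fish'n'Chips" for the file shown above.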


[Issue 13887] New: Add checksums and other security artifacts to tools downloads

2014-12-22 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13887

  Issue ID: 13887
   Summary: Add checksums and other security artifacts to tools
downloads
   Product: D
   Version: D2
  Hardware: x86
OS: Mac OS X
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: websites
  Assignee: nob...@puremagic.com
  Reporter: and...@erdani.com

Per http://forum.dlang.org/thread/wbgkrygtmtboqgipm...@forum.dlang.org

--


[Issue 13887] Add checksums and other security artifacts to tools downloads

2014-12-22 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13887

Vladimir Panteleev thecybersha...@gmail.com changed:

           What    |Removed |Added
   ----------------------------------------------------------
                CC |        |thecybersha...@gmail.com

--- Comment #1 from Vladimir Panteleev thecybersha...@gmail.com ---
FWIW: Certum provides free code-signing certificates to open-source developers.

I wrote some info on my blog:

http://blog.thecybershadow.net/2013/08/22/code-signing/

However, since these are given to individuals, I think DigitalMars should
probably just buy a code signing certificate.

--


[Issue 13888] New: VisualD project settings use the same property grid as C/C++ projects?

2014-12-22 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13888

  Issue ID: 13888
   Summary: VisualD project settings use the same property grid as
C/C++ projects?
   Product: D
   Version: unspecified
  Hardware: x86_64
OS: Windows
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: VisualD
  Assignee: nob...@puremagic.com
  Reporter: turkey...@gmail.com

Is it possible for VisualD to use the same property grid that C/C++ projects
use for the project settings?
There are some subtle differences in behaviour, and I think it would go a long
way to making the experience feel a lot more 'real'.

The distinction from the MSVC projects gives an impression to new users that
the D ecosystem lives 'outside'/separately, and doesn't really integrate well.

This is all about user impressions, and familiarity + usability.

--


[Issue 13889] New: mscoff32 libs not available

2014-12-22 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13889

  Issue ID: 13889
   Summary: mscoff32 libs not available
   Product: D
   Version: D2
  Hardware: x86
OS: Windows
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: DMD
  Assignee: nob...@puremagic.com
  Reporter: turkey...@gmail.com

The current beta supports mscoff output for 32bit code, but there are no coff32
libs present (druntime, phobos, curl, etc).

Can they be built and bundled with DMD in the future, so that we can use
mscoff32 out of the box?

--


Re: HDF5 bindings for D

2014-12-22 Thread Laeeth Isharc via Digitalmars-d-announce
On Monday, 22 December 2014 at 05:04:10 UTC, Rikki Cattermole 
wrote:
You seem to be missing your dub file. Would be rather hard to 
get it onto dub repository without it ;)
Oh and keep the bindings separate from wrappers in terms of 
subpackages.


Thanks - added now.

Will work on separating out the bindings when I have a bit more time, 
but it should be easy enough.


Swiss Ephemeris / Nelder-Mead simplex

2014-12-22 Thread Laeeth Isharc via Digitalmars-d-announce
Last one for a while, I think.  I wish you all a very peaceful 
Christmas and New Year, and let's hope 2015 brings some more 
positive energy to the world.


Links here:
https://github.com/Laeeth/d_simplex
https://github.com/Laeeth/d_swisseph



1. D bindings/wrappers for the swiss ephemeris

http://www.astro.com/swisseph/swephinfo_e.htm
The SWISS EPHEMERIS is the high precision ephemeris developed by 
Astrodienst, largely based upon the DExxx ephemerides from NASA's 
JPL . The original release in 1997 was based on the DE405/406 
ephemeris. Since release 2.00 in February 2014, it is based on 
the DE431 ephemeris released by JPL in September 2013.


NB - Swiss Ephemeris is not free for commercial use.

2. D port of a simple Nelder-Mead simplex minimisation (written by 
Michael F. Hutt in the original C version), with constraints.  
From Wikipedia:


https://en.wikipedia.org/wiki/Nelder-Mead_method
The Nelder–Mead method or downhill simplex method or amoeba 
method is a commonly used nonlinear optimization technique, which 
is a well-defined numerical method for problems for which 
derivatives may not be known. However, the Nelder–Mead technique 
is a heuristic search method that can converge to non-stationary 
points[1] on problems that can be solved by alternative methods.



Links here:
https://github.com/Laeeth/d_simplex
https://github.com/Laeeth/d_swisseph


dfl2 can work for 64 bit,and base on D2.067b1

2014-12-22 Thread FrankLike via Digitalmars-d-announce

Now you can use dfl2 to get a 64-bit WinForm.

Frank.


dco can work for 64 bit,and base on D2.067b1:code.dlang.org

2014-12-22 Thread FrankLike via Digitalmars-d-announce
dco is a build tool and is very easy to use. It can build dfl64.lib, 
dgui.lib, or your other projects, and it can automatically copy 
dfl.lib to dmd2\windows\lib or lib64.
After you work on dfl2, use build.bat; you will find it very 
easy to use.


dco:
https://github.com/FrankLIKE/dco/
dfl2:
https://github.com/FrankLIKE/dfl2/

Frank


Re: dfl2 can work for 64 bit,and base on D2.067b1

2014-12-22 Thread FrankLike via Digitalmars-d-announce

On Monday, 22 December 2014 at 11:33:14 UTC, FrankLike wrote:

Now,you can use dfl2 to get the 64 bit winForm.

Frank.


dfl2:
https://github.com/FrankLIKE/dfl2/



Re: Facebook is using D in production starting today

2014-12-22 Thread FrankLike via Digitalmars-d-announce
On Thursday, 18 December 2014 at 09:18:06 UTC, Rune Christensen 
wrote:
On Monday, 18 November 2013 at 17:23:25 UTC, Andrei 
Alexandrescu wrote:

On 11/18/13 6:03 AM, Gary Willoughby wrote:
On Friday, 11 October 2013 at 00:36:12 UTC, Andrei 
Alexandrescu wrote:
In all likelihood we'll follow up with a blog post 
describing the

process.


Any more news on this Andrei?


Not yet. I'm the bottleneck here - must find the time to work 
on that.


Andrei


Are you still using D in production? Are you using it more than 
before?


Regards,
Rune


D is useful. Now I use 
dfl2 (https://github.com/FrankLIKE/dfl2/) and the build tool dco 
(https://github.com/FrankLIKE/dco/); very good.

Frank


Re: Facebook is using D in production starting today

2014-12-22 Thread Stefan Koch via Digitalmars-d-announce

On Monday, 22 December 2014 at 12:12:19 UTC, FrankLike wrote:
D is useful ,now I use the 
dfl2(https://github.com/FrankLIKE/dfl2/)and the build tool dco 
(https://github.com/FrankLIKE/dco/),very good.

Frank


Do you work for Facebook?


Re: dco can work for 64 bit,and base on D2.067b1:code.dlang.org

2014-12-22 Thread uri via Digitalmars-d-announce

On Monday, 22 December 2014 at 11:41:16 UTC, FrankLike wrote:
dco is a build tool,and very easy to use,it can build dfl64.lib 
,dgui.lib or other your projects,it can auto copy dfl.lib to 
the dmd2\windows\lib or lib64.
After you work on the dfl2,use the build.bat,you will feel it's 
very easy to use.


dco:
https://github.com/FrankLIKE/dco/
dfl2:
https://github.com/FrankLIKE/dfl2/

Frank


Thanks, I'm in the process of looking at CMake/SCons alternatives 
right at the moment and will have a look at dco.


I'm trying dub at the moment and it's working perfectly fine so 
far as a build tool. The alternatives, such as CMake and SCons, 
are proven technologies with D support that have also worked for 
me in the past.


Can I ask what the existing tools were missing and why you 
felt it necessary to reinvent your own build tool?


Thanks,
uri


Re: dco can work for 64 bit,and base on D2.067b1:code.dlang.org

2014-12-22 Thread Dejan Lekic via Digitalmars-d-announce

On Monday, 22 December 2014 at 12:57:01 UTC, uri wrote:

On Monday, 22 December 2014 at 11:41:16 UTC, FrankLike wrote:
dco is a build tool,and very easy to use,it can build 
dfl64.lib ,dgui.lib or other your projects,it can auto copy 
dfl.lib to the dmd2\windows\lib or lib64.
After you work on the dfl2,use the build.bat,you will feel 
it's very easy to use.


dco:
https://github.com/FrankLIKE/dco/
dfl2:
https://github.com/FrankLIKE/dfl2/

Frank


Thanks, I'm in the process of looking at CMake/SCons 
alternatives right at the moment and will have a look at dco.


I'm trying dub at the moment and it's working perfectly fine so 
far as a build tool. The alternatives, such as CMake and SCons, 
are proven technologies with D support that have also worked 
for me in the past.


Can I ask what the existing tools were missing and why did you 
felt it necessary to reinvented your own build tool?


Thanks,
uri


Then try waf as well. :) https://code.google.com/p/waf/


Re: HDF5 bindings for D

2014-12-22 Thread John Colvin via Digitalmars-d-announce

On Monday, 22 December 2014 at 04:51:44 UTC, Laeeth Isharc wrote:

https://github.com/Laeeth/d_hdf5

HDF5 is a very valuable tool for those working with large data 
sets.


From HDF5group.org

HDF5 is a unique technology suite that makes possible the 
management of extremely large and complex data collections. The 
HDF5 technology suite includes:


* A versatile data model that can represent very complex data 
objects and a wide variety of metadata.
* A completely portable file format with no limit on the number 
or size of data objects in the collection.
* A software library that runs on a range of computational 
platforms, from laptops to massively parallel systems, and 
implements a high-level API with C, C++, Fortran 90, and Java 
interfaces.
* A rich set of integrated performance features that allow for 
access time and storage space optimizations.
* Tools and applications for managing, manipulating, viewing, 
and analyzing the data in the collection.
* The HDF5 data model, file format, API, library, and tools are 
open and distributed without charge.


From h5py.org:
[HDF5] lets you store huge amounts of numerical data, and 
easily manipulate that data from NumPy. For example, you can 
slice into multi-terabyte datasets stored on disk, as if they 
were real NumPy arrays. Thousands of datasets can be stored in 
a single file, categorized and tagged however you want.


H5py uses straightforward NumPy and Python metaphors, like 
dictionary and NumPy array syntax. For example, you can iterate 
over datasets in a file, or check out the .shape or .dtype 
attributes of datasets. You don't need to know anything special 
about HDF5 to get started.


In addition to the easy-to-use high level interface, h5py rests 
on a object-oriented Cython wrapping of the HDF5 C API. Almost 
anything you can do from C in HDF5, you can do from h5py.


Best of all, the files you create are in a widely-used standard 
binary format, which you can exchange with other people, 
including those who use programs like IDL and MATLAB.


===
As far as I know there has not really been a complete set of 
HDF5 bindings for D yet.


Bindings should have three levels:
1. pure C API declaration
2. 'nice' D wrapper around C API (eg that knows about strings, 
not just char*)

3. idiomatic D interface that uses CTFE/templates

I borrowed Stefan Frijter's work on (1) above to get started.  
I cannot keep track of things when split over too many source 
files, so I put everything in one file - hdf5.d.


Have implemented a basic version of 2.  Includes throwOnError 
rather than forcing checking status C style, but the exception 
code is not very good/complete (time + lack of experience with 
D exceptions).


(3) will have to come later.

It's more or less complete, and the examples I have translated 
so far mostly work.  But still a work in progress.  Any 
help/suggestions appreciated.  [I am doing this for myself, so 
project is not as pretty as I would like in an ideal world].



https://github.com/Laeeth/d_hdf5


Also relevant to some: http://code.dlang.org/packages/netcdf


Re: dco can work for 64 bit,and base on D2.067b1:code.dlang.org

2014-12-22 Thread Russel Winder via Digitalmars-d-announce

On Mon, 2014-12-22 at 12:57 +, uri via Digitalmars-d-announce wrote:
 […]
 
 Thanks, I'm in the process of looking at CMake/SCons alternatives 
 right at the moment and will have a look at dco.

May I ask why SCons is insufficient for you?

 I'm trying dub at the moment and it's working perfectly fine so far 
 as a build tool. The alternatives, such as CMake and SCons, are 
 proven technologies with D support that have also worked for me in 
 the past.
 
 Can I ask what the existing tools were missing and why did you felt 
 it necessary to reinvented your own build tool?

The makers of Dub chose to invent a new build tool despite Make, CMake 
and SCons. Although it is clear Dub is the current de facto standard 
build tool for pure D code, there is nothing wrong with alternate 
experiments. I hope we can have an open technical discussion of these 
points; it can only help all the build systems with D support.
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



GCCJIT Bindings for D

2014-12-22 Thread Iain Buclaw via Digitalmars-d-announce

Hi,

Apparently I've never announced this here, so here we go.

I have written, and started maintaining D bindings for the GCCJIT 
library, available on github at this location:


https://github.com/ibuclaw/gccjitd


What is GCCJIT?
---
GCCJIT is a new front-end for gcc that aims to provide an 
embeddable shared library with an API for adding compilation to 
existing programs using GCC as the backend.


This shared library can then be dynamically-linked into bytecode 
interpreters and other such programs that want to generate 
machine code on the fly at run-time.


The library is of alpha quality and the API is subject to change. 
 It is however in development for the next GCC release (5.0).



How can I use it?
---
See the following link for a hello world program.

https://github.com/ibuclaw/gccjitd/blob/master/tests/dapi.d

I am currently in the process of Ddoc-ifying the documentation 
that comes with the C API binding and moving that across to the D 
API.  Improvements shall come over the next months - though any 
assistance in making the Ddocs prettier is a welcome contribution.



Regards
Iain.


Re: Swiss Ephemeris / Nelder-Mead simplex

2014-12-22 Thread bachmeier via Digitalmars-d-announce

On Monday, 22 December 2014 at 08:43:56 UTC, Laeeth Isharc wrote:
Last one for a while, I think.  I wish you all a very peaceful 
Christmas and New Year, and let's hope 2015 brings some more 
positive energy to the world.


Links here:
https://github.com/Laeeth/d_simplex
https://github.com/Laeeth/d_swisseph



1. D bindings/wrappers for the swiss ephemeris

http://www.astro.com/swisseph/swephinfo_e.htm
The SWISS EPHEMERIS is the high precision ephemeris developed 
by Astrodienst, largely based upon the DExxx ephemerides from 
NASA's JPL . The original release in 1997 was based on the 
DE405/406 ephemeris. Since release 2.00 in February 2014, it is 
based on the DE431 ephemeris released by JPL in September 2013.


NB - Swiss Ephemeris is not free for commercial use.

2. D port of simple Nelder-Mead simplex minimisation (written 
by Michael F. Hutt in original C version) here.  With 
constraints.  From Wiki:


https://en.wikipedia.org/wiki/Nelder-Mead_method
The Nelder–Mead method or downhill simplex method or amoeba 
method is a commonly used nonlinear optimization technique, 
which is a well-defined numerical method for problems for which 
derivatives may not be known. However, the Nelder–Mead 
technique is a heuristic search method that can converge to 
non-stationary points[1] on problems that can be solved by 
alternative methods.



Links here:
https://github.com/Laeeth/d_simplex
https://github.com/Laeeth/d_swisseph


It's been ages since I read the paper, but there is a parallel 
version of Nelder-Mead that is supposed to give very large 
performance improvements, even when used on a single processor:


http://www.cs.ucsb.edu/~kyleklein/publications/neldermead.pdf

It is not difficult to implement. I may look into modifying your 
code to implement it when I get some time.


Re: Swiss Ephemeris / Nelder-Mead simplex

2014-12-22 Thread via Digitalmars-d-announce

On Monday, 22 December 2014 at 20:46:23 UTC, bachmeier wrote:
It's been ages since I read the paper, but there is a parallel 
version of Nelder-Mead that is supposed to give very large 
performance improvements, even when used on a single processor:


http://www.cs.ucsb.edu/~kyleklein/publications/neldermead.pdf

It is not difficult to implement. I may look into modifying 
your code to implement it when I get some time.


It will certainly also be advantageous to pass the functions as 
aliases, so that they can get inlined.
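
To illustrate that point (this is just a toy 1-D minimiser, not the
d_simplex code and not Nelder-Mead; minimize1D is an illustrative name):
taking the objective as an alias template parameter instead of a function
pointer or delegate lets the compiler see its body at the call site and
inline it.

double minimize1D(alias f)(double lo, double hi, size_t steps = 1000)
{
    double best = lo;
    double bestVal = f(lo);
    foreach (i; 1 .. steps + 1)
    {
        immutable x = lo + (hi - lo) * i / steps;
        immutable v = f(x);                  // f is known at compile time
        if (v < bestVal) { bestVal = v; best = x; }
    }
    return best;
}

unittest
{
    // The lambda is passed as an alias, so the call inside the loop
    // can be inlined rather than going through a delegate.
    auto m = minimize1D!(x => (x - 2.0) * (x - 2.0))(0.0, 5.0);
    assert(m > 1.9 && m < 2.1);
}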


Re: Swiss Ephemeris / Nelder-Mead simplex

2014-12-22 Thread Laeeth Isharc via Digitalmars-d-announce

On Monday, 22 December 2014 at 21:39:08 UTC, Marc Schütz wrote:

On Monday, 22 December 2014 at 20:46:23 UTC, bachmeier wrote:
It's been ages since I read the paper, but there is a parallel 
version of Nelder-Mead that is supposed to give very large 
performance improvements, even when used on a single processor:


http://www.cs.ucsb.edu/~kyleklein/publications/neldermead.pdf

It is not difficult to implement. I may look into modifying 
your code to implement it when I get some time.


It will certainly also be advantageous to pass the functions as 
aliases, so that they can get inlined.


Thanks, Marc.  I appreciate the pointer, and it would be great if you do 
have time to look at the code.  I confess that it can't really be called 
my own implementation, as I simply ported it to D.  There is some 
more clever stuff within quantlib (c++ project), but I quite 
liked the idea of starting with this one as it is simple, and 
speed is not yet vital at this stage.



Laeeth.



Calypso: Direct and full interfacing to C++

2014-12-22 Thread Elie Morisse via Digitalmars-d-announce

Hi everyone,

I have the pleasure to announce to you all the existence of a 
modified LDC able to interface directly to C++ libraries, wiping 
out the need to write bindings:


 https://github.com/Syniurge/Calypso

It's at a prototype stage, but its C++ support is pretty wide 
already:


 • Global variables
 • Functions
 • Structs
 • Unions (symbol only)
 • Enums
 • Typedefs
 • C++ class creation with the correct calls to ctors 
(destruction is disabled for now)

 • Virtual function calls
 • Static casts between C++ base and derived classes (incl. 
multiple inheritance offsets)
 • Mapping template implicit and explicit specializations already 
in the PCH to DMD ones, no new specialization on the D side yet
 • D classes inheriting from C++ ones, including the correct 
vtable generation for the C++ part of the class


So what is this sorcery? Let's remind ourselves that this isn't 
supposed to be feasible:


Being 100% compatible with C++ means more or less adding a fully 
functional C++ compiler front end to D. Anecdotal evidence 
suggests that writing such is a minimum of a 10 man-year 
project, essentially making a D compiler with such capability 
unimplementable.

http://dlang.org/cpp_interface.html

Well.. it is :D
Calypso introduces the modmap keyword, as in:

  modmap (C++) "cppheader.h";

to generate with the help of Clang libraries a virtual tree of 
C++ modules. Then after making Clang generate a PCH for all the 
headers, the PCH is loaded and classes, structs, enums are placed 
inside modules named after them, while global variables and 
functions are in a special module named _. For example:


  import (C++) Namespace.SomeClass;  // imports 
Namespace::SomeClass
  import (C++) Namespace._;  // imports all the global variables 
and functions in Namespace
  import (C++) _ : myCfunc, myGlobalVar;  // importing the global 
namespace = bad idea, but selective imports work


Being a prototype, I didn't really pay attention to code 
conventions or elegance and instead focused on getting things 
working.
And being tied to LDC and Clang (I have no idea how feasible a 
GCC version would be), it's going to stay like this for some time 
until I get feedback from the contributors on how this all should 
really be implemented,. For example Calypso introduces language 
plugins, to minimize the amount of code specific to C++ and to 
make support of foreign languages cleaner and less intrusive, 
although it of course needs numerous hooks here and there in DMD 
and LDC.


Calypso is still WIP, but it's in pretty good shape and already 
works in a lot of test cases (see tests/calypso/), and is almost 
ready to use for C++ libraries at least. Since C libraries are in 
the global namespace, it's not a convenient replacement yet for 
bindings until I implement the Clang module map format. More info 
this blog post detailing some of the history behind Calypso:


http://syniurge.blogspot.com/2013/08/calypso-to-mars-first-contact.html

So.. Merry Christmas dear D community? :)


My take on the current talks of feature freezing D: the 
strength of D is its sophistication. The core reason why D fails 
to attract more users isn't the frequent compiler bugs or 
regressions, but the huge amount of time needed to get something 
done because neither equivalent nor bindings exist for most big 
and widespread C++ libraries like Qt. All these talks about 
making D a more minimalist language won't solve much and will 
only result in holding back D, which has the potential to become 
a superset of all the good in other system languages, as well as 
bringing its own powerful unique features such as metaprogramming 
done right.
By removing the main reason why D wasn't a practical choice, this 
will hopefully unlock the situation and make D gain momentum as 
well as attract more contributors to the compilers to fix bugs 
and regressions before releases.


Re: dco can work for 64 bit,and base on D2.067b1:code.dlang.org

2014-12-22 Thread uri via Digitalmars-d-announce
On Monday, 22 December 2014 at 18:33:42 UTC, Russel Winder via 
Digitalmars-d-announce wrote:


On Mon, 2014-12-22 at 12:57 +, uri via 
Digitalmars-d-announce wrote:

[…]

Thanks, I'm in the process of looking at CMake/SCons 
alternatives right at the moment and will have a look at dco.


May I ask why SCons is insufficient for you?


It isn't. We review our build system every 12 months over the Xmas 
quiet period and tidy it all up. Part of the process is trying 
alternatives.


We use Python +SCons to drive our builds and CMake to generate 
native makefiles. We find this approach scales better in terms of 
speed and system load.


It is a pity CMake invented its own noisy script language, though. I 
also find that with CMake it can be extremely difficult to establish 
context when looking at the code. This is why we're slowly 
migrating to SCons.




I'm trying dub at the moment and it's working perfectly fine 
so far as a build tool. The alternatives, such as CMake and 
SCons, are proven technologies with D support that have also 
worked for me in the past.


Can I ask what the existing tools were missing and why did you 
felt it necessary to reinvented your own build tool?


The makers of Dub chose to invent a new build tool despite 
Make, CMake
and SCons. Although it is clear Dub is the current de facto 
standard
build tool for pure D codes, there is nothing wrong with 
alternate
experiments. I hope we can have an open technical discussion of 
these

points, it can only help all the build systems with D support.


I really like DUB for quick development, but in its current form 
I don't see it scaling to larger builds. IMO the use of JSON puts 
it on par with the Java build tool Ant. JSON and XML (Ant) are 
data formats, not scripting languages, and in my experience a 
large build system requires logic and flow control. I've had to 
do this before in Ant XML and it isn't pretty, nor is it flexible.


I use SCons for personal D projects that I think will be long 
lived and DUB for quick experiments. I was using CMake for 
personal work but that script is too ugly :)


Cheers,
uri


Re: Calypso: Direct and full interfacing to C++

2014-12-22 Thread Rikki Cattermole via Digitalmars-d-announce

On 23/12/2014 12:14 p.m., Elie Morisse wrote:

Hi everyone,

I have the pleasure to announce to you all the existence of a modified
LDC able to interface directly to C++ libraries, wiping out the need to
write bindings:

  https://github.com/Syniurge/Calypso

It's at a prototype stage, but its C++ support is pretty wide already:

  • Global variables
  • Functions
  • Structs
  • Unions (symbol only)
  • Enums
  • Typedefs
  • C++ class creation with the correct calls to ctors (destruction is
disabled for now)
  • Virtual function calls
  • Static casts between C++ base and derived classes (incl. multiple
inheritance offsets)
  • Mapping template implicit and explicit specializations already in
the PCH to DMD ones, no new specialization on the D side yet
  • D classes inheriting from C++ ones, including the correct vtable
generation for the C++ part of the class

So what is this sorcery? Let's remind ourselves that this isn't supposed
to be feasible:


Being 100% compatible with C++ means more or less adding a fully
functional C++ compiler front end to D. Anecdotal evidence suggests
that writing such is a minimum of a 10 man-year project, essentially
making a D compiler with such capability unimplementable.

http://dlang.org/cpp_interface.html

Well.. it is :D
Calypso introduces the modmap keyword, as in:

   modmap (C++) "cppheader.h";


That really should be a pragma:
pragma(modmap, "C++", "cppheader.h");
since pragmas are the way to instruct the compiler to do something.


to generate with the help of Clang libraries a virtual tree of C++
modules. Then after making Clang generate a PCH for all the headers, the
PCH is loaded and classes, structs, enums are placed inside modules
named after them, while global variables and functions are in a special
module named _. For example:

   import (C++) Namespace.SomeClass;  // imports Namespace::SomeClass
   import (C++) Namespace._;  // imports all the global variables and
functions in Namespace
   import (C++) _ : myCfunc, myGlobalVar;  // importing the global
namespace = bad idea, but selective imports work

Being a prototype, I didn't really pay attention to code conventions or
elegance and instead focused on getting things working.
And being tied to LDC and Clang (I have no idea how feasible a GCC
version would be), it's going to stay like this for some time until I
get feedback from the contributors on how this all should really be
implemented,. For example Calypso introduces language plugins, to
minimize the amount of code specific to C++ and to make support of
foreign languages cleaner and less intrusive, although it of course
needs numerous hooks here and there in DMD and LDC.

Calypso is still WIP, but it's in pretty good shape and already works in
a lot of test cases (see tests/calypso/), and is almost ready to use for
C++ libraries at least. Since C libraries are in the global namespace,
it's not a convenient replacement yet for bindings until I implement the
Clang module map format. More info this blog post detailing some of the
history behind Calypso:

http://syniurge.blogspot.com/2013/08/calypso-to-mars-first-contact.html

So.. Merry Christmas dear D community? :)


My take on the current talks of feature freezing D: the strength of D
is its sophistication. The core reason why D fails to attract more users
isn't the frequent compiler bugs or regressions, but the huge amount of
time needed to get something done because neither equivalent nor
bindings exist for most big and widespread C++ libraries like Qt. All
these talks about making D a more minimalist language won't solve much
and will only result in holding back D, which has the potential to
become a superset of all the good in other system languages, as well as
bringing its own powerful unique features such as metaprogramming done
right.
By removing the main reason why D wasn't a practical choice, this will
hopefully unlock the situation and make D gain momentum as well as
attract more contributors to the compilers to fix bugs and regressions
before releases.


Will you be upstreaming this? Or maintaining this completely yourself?


Re: dco can work for 64 bit,and base on D2.067b1:code.dlang.org

2014-12-22 Thread FrankLike via Digitalmars-d-announce

On Monday, 22 December 2014 at 12:57:01 UTC, uri wrote:

On Monday, 22 December 2014 at 11:41:16 UTC, FrankLike wrote:
dco is a build tool,and very easy to use,it can build 
dfl64.lib ,dgui.lib or other your projects,it can auto copy 
dfl.lib to the dmd2\windows\lib or lib64.
After you work on the dfl2,use the build.bat,you will feel 
it's very easy to use.


dco:
https://github.com/FrankLIKE/dco/
dfl2:
https://github.com/FrankLIKE/dfl2/

Frank


Thanks, I'm in the process of looking at CMake/SCons 
alternatives right at the moment and will have a look at dco.


I'm trying dub at the moment and it's working perfectly fine so 
far as a build tool. The alternatives, such as CMake and SCons, 
are proven technologies with D support that have also worked 
for me in the past.


Can I ask what the existing tools were missing and why did you 
felt it necessary to reinvented your own build tool?


Thanks,
uri
dco makes building a project easy: it automatically adds itself to the 
bin folder, can automatically add d files, automatically ignores the d 
files in the ignoreFiles folder, and automatically passes -L libs from 
dco.ini. In the next version, you will be able to set your often-used 
libs in dco.ini.


Re: Calypso: Direct and full interfacing to C++

2014-12-22 Thread Elie Morisse via Digitalmars-d-announce
On Tuesday, 23 December 2014 at 00:01:30 UTC, Rikki Cattermole 
wrote:
Will you be upstreaming this? Or maintaining this completely 
yourself?


The ultimate goal is upstream, but first I need to agree with the 
main DMD and LDC contributors about how this should really be 
done. I.e. at the moment the Calypso code coexists with the vanilla 
C++ support, which has a different coding philosophy and is more 
intertwined with the rest of the code.


So I expect that I'll have to maintain it myself for quite some time
before this happens. And of course I'll make Calypso catch up
with upstream LDC frequently.


Re: Calypso: Direct and full interfacing to C++

2014-12-22 Thread Dicebot via Digitalmars-d-announce

On Monday, 22 December 2014 at 23:14:44 UTC, Elie Morisse wrote:
Being 100% compatible with C++ means more or less adding a 
fully functional C++ compiler front end to D. Anecdotal 
evidence suggests that writing such is a minimum of a 10 
man-year project, essentially making a D compiler with such 
capability unimplementable.

http://dlang.org/cpp_interface.html

Well.. it is :D


Well, technically speaking you DO include a fully functional C++
compiler front end with much more than 10 man-years of time
invested - it just happens that it is already available and
called clang :)


The project itself is very cool, but I have doubts about the
possibility of merging this upstream. Doing so would make a full D
implementation effectively impossible without some C++ compiler
already available as a library on the same platform - quite a
restriction!


I think it is better suited as an LDC extension, and I would
discourage its usage in public open-source projects, which should
stick to the old way of C binding generation instead. For more
in-house projects it looks like an absolute killer and exactly the
thing the Facebook guys wanted :)


Re: DIP69 - Implement scope for escape proof references

2014-12-22 Thread Dicebot via Digitalmars-d

On Monday, 22 December 2014 at 03:07:53 UTC, Walter Bright wrote:

On 12/21/2014 2:06 AM, Dicebot wrote:
No, it is exactly the other way around. The very point of what
I am saying is that you DON'T CARE about ownership as long as the
worst case scenario is assumed. I have zero idea why you identify
it as conflating with ownership when it is explicitly designed to
be distinct.


The point of transitive scoping would be if the root owned the 
data reachable through the root.


Quoting myself:

For me scope-ness is a property of the view, not the object itself -
this also makes the ownership method of the actual data irrelevant.
The only difference between GC-owned data and stack-allocated data
is that the former can optionally have a scoped view, while for the
latter the compiler must force it as the only view available.


It doesn't matter whether the root owns the data. We _assume_ that
as the worst case scenario, and the allowed actions form a strict
subset of the allowed actions for any other ownership situation.
Such `scope` is to stack/GC what `const` is to mutable/immutable -
a common denominator.
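
A minimal illustration of that const analogy in today's D (not of the
proposed scope semantics): a const view accepts both mutable and
immutable sources and restricts the callee to the common safe subset
of operations.

import std.stdio : writeln;

// `s` may originate from mutable or immutable storage; const is the
// "worst case" view, so only non-mutating operations are allowed here.
void printIt(const(char)[] s)
{
    writeln(s);
}

void demo()
{
    char[] m = "abc".dup;        // mutable source
    immutable string i = "def";  // immutable source
    printIt(m);
    printIt(i);
}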


The point of transitive scope is to make it easy to expose complex
custom data structures without breaking memory safety.


Re: What is the D plan's to become a used language?

2014-12-22 Thread Daniel Murphy via Digitalmars-d
Ola Fosheim Grøstad  wrote in message 
news:aimenbdjdflzgkkte...@forum.dlang.org...


Hardly, you have to be specific and make the number of issues covered in 
the next release small enough to create a feeling of being within reach in 
a short time span. People who don't care about fixing current issues 
should join a working group focusing on long term efforts (such as new 
features, syntax changes etc).


Saying it will work doesn't make it so.

That's good, people should not expect experimental features or unpolished 
implementations to be added to the next release. What goes into the next 
release should be decided on before you start on it.


That's nice and all, but if you can't get developers to work on the features
you've decided on then all you end up doing is indefinitely postponing other
contributions.


I do agree that work should be polished before it is merged, but good luck 
convincing Walter to stop merging work-in-progress features into master. 
I've been on both sides of that argument and neither way is without 
drawbacks, with the current contributor energy we have available. 



Davidson/TJB - HDF5 - Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

On Saturday, 22 March 2014 at 14:33:02 UTC, TJB wrote:
On Saturday, 22 March 2014 at 13:10:46 UTC, Daniel Davidson 
wrote:
Data storage for high volume would also be nice. A D 
implementation of HDF5, via wrappers or otherwise, would be a 
very useful project. Imagine how much more friendly the API 
could be in D. Python's tables library makes it very simple. 
You have to choose a language to not only process and 
visualize data, but store and access it as well.


Thanks
Dan


Well, I for one, would be hugely interested in such a thing.  A
nice D API to HDF5 would be a dream for my data problems.

Did you use HDF5 in your finance industry days then?  Just
curious.

TJB


Well for HDF5 - the bindings are here now - pre-alpha but will
get there soon enough - and wrappers coming along also.


Any thoughts/suggestions/help appreciated.  Github here:

https://github.com/Laeeth/d_hdf5


I wonder how much work it would be to port or implement Pandas 
type functionality in a D library.


Re: Invariant for default construction

2014-12-22 Thread Daniel Murphy via Digitalmars-d

Walter Bright  wrote in message news:m78i71$1c2h$1...@digitalmars.com...

It all depends on how invariant is defined. It's defined as an invariant 
on what it owns, not whatever is referenced by the object.


Whether or not it owns the data it references is application specific.
Where are you saying is the correct place to put a check like my example,
to ensure that an owned object correctly references its parent?



Re: Rectangular multidimensional arrays for D

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

On Friday, 11 October 2013 at 22:41:06 UTC, H. S. Teoh wrote:
What's the reason Kenji's pull isn't merged yet? As I see it, 
it does
not introduce any problematic areas, but streamlines 
multidimensional
indexing notation in a nice way that fits in well with the rest 
of the

language. I, for one, would push for it to be merged.

In any case, I've seen your multidimensional array 
implementation
before, and I think it would be a good thing to have it in 
Phobos. In
fact, I've written my own as well, and IIRC one or two other 
people have

done the same. Clearly, the demand is there.

See also the thread about std.linalg; I think before we can 
even talk
about having linear algebra code in Phobos, we need a 
solidly-designed
rectangular array API. As I said in that other thread, matrix 
algebra
really should be built on top of a solid rectangular array API, 
and not
be yet another separate kind of type that's similar to, but 
incompatible

with rectangular arrays. A wrapper type can be used to make a
rectangular array behave in the linear algebra sense (i.e. 
matrix

product instead of per-element multiplication).



Hi.

I wondered how things were developing with the rectangular arrays 
(not sure who is in charge of reviewing, but I guess it is not HS 
Teoh).  It would be interesting to see this being available for 
D, and I agree with others that it is one of the key foundation 
blocks one would need to see in place before many other useful 
libraries can be built on top.


Let me know if there is anything I can help with (although I cannot
promise to have time, I will try).



Laeeth.


Re: What is the D plan's to become a used language?

2014-12-22 Thread Bienlein via Digitalmars-d


People have already suggested that you actually try vibe.d at
least once before repeating the "CSP is necessary for easy async"
mantra.


I was trying to point out in some previous thread that the value
of CSP is that concurrent things, as seen from the code, look like
sync calls (not async, but sync). The statement above again
says async and not sync (in the "CSP is necessary for easy async"
mantra). So I'm not sure the point was understood.


Asynchronous programming is very difficult to get right and also
inherently difficult. Programming with channels, where things look
like synchronous calls, makes concurrent programming immensely
easier than asynchronous programming. If you have done
asynchronous programming for some years and then only spend half an
hour looking at concurrency in Go, you will grasp immediately that
this is a lot simpler. All cores are made use of very evenly out of
the box and are constantly under high load. You have to work very
hard for a long time to achieve the same in Java/C/C++/C#/whatever,
because the threading model is conventional. With CSP-style
concurrency in Go it is a lot easier to write concurrent server-side
applications, and whatever you write can hold 40,000 network
connections out of the box. Yes, you can do that with vibe.d as
well. But for Go you only need to learn a drop-dead simple language
and you can start writing your server application, because all you
need for concurrency is in the language.
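
For reference, a minimal sketch of what synchronous-looking message
passing already looks like in D with std.concurrency (the library
names are real; the squaring worker is made up for illustration):

import std.concurrency : spawn, send, receiveOnly, ownerTid;
import std.stdio : writeln;

void worker()
{
    // blocks until a message arrives - reads like a synchronous call
    immutable n = receiveOnly!int();
    ownerTid.send(n * n);
}

void main()
{
    auto tid = spawn(&worker);
    tid.send(21);
    // again a blocking, synchronous-looking receive
    writeln("answer: ", receiveOnly!int());   // prints 441
}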


One idea would be to add a drop dead simple abstraction layer for 
vibe.d to provide the same and sell D as a language for server 
side development like Go. There is a need for a unique selling 
point. Let's say the guys at Docker had chosen D, because it had 
that already. Then they would realize that they also can use D 
for general purpose programming and be happy. But first there has 
to be a unique selling point. The selling point of a better C++ 
has not worked out. You have to accept that and move on. Not 
accepting that time moves on is not an option.


Sorry, but wrong and wrong. Go has a model of concurrency and 
parallelism that works very well and no other language has, so 
Go has technical merit.


The technical merit is in the concurrency model, as already said
in the statement above. And the current era is one of server-side
software development. When C++ was started it was time for some
better C. That time is over. Things change constantly and there
is nothing you can do about that. You can accept that things have
moved on and make use of the new opportunity of server-side
programming as a new selling point, or continue living in the
past. Go might be simplistic. So add CSP-style channels to D and
you can overtake Go in all respects very easily. Besides, Haskell
also has channel-based inter-process communication. If that is
not academic/scientific backing then I don't know what is.


Re: What's missing to make D2 feature complete?

2014-12-22 Thread Peter Alexander via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


In my mind there are a few categories of outstanding issues.

First, there are cases where the language just does not work as
advertised. Imports are an example of this. Probably scope as
well and maybe shared (although I'm not sure what the situation
with that is).

Second, there are cases where the language works as designed, but
the design makes it difficult to get work done. For example,
@nogc and exceptions, or const with templates (or const
altogether). Order of conditional compilation needs to be defined
(see deadalnix's DIP).

And finally there's the things we would really like for D to be
successful. Tuple support and memory management are examples of
those. This category is essentially infinite.

I really think the first two categories need to be solved before
anything is frozen.


Re: What's missing to make D2 feature complete?

2014-12-22 Thread Dejan Lekic via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


There is no feature-complete language. What makes mainstream
languages more likely candidates for future software projects is
the fact that they are properly maintained by a team of
professionals the language community trusts.


I can give Java and C++ as perfect examples. (I am doing this
mostly because these two are what I used most of the time in my
professional career.)
- Neither of them is feature complete, yet they are the most likely
candidate languages for many future software projects. Why? I
believe the major reason is that there is a well-defined
standardization process, and, more importantly, there are
companies behind these languages. Naturally, this makes new
features come to the language *extremely slowly* (we are talking
10+ years here).


Perhaps the best course of action is to extract the stable 
features that D has now, and fork a stable branch that is 
maintained by people who are actually using that stable version 
of D in *their products*. This is crucial because it is in their 
own interest to have this branch as stable as possible.


The problem with D is that it is a pragmatic language, and this
problem is why I love D. The reason I say it is a problem is
that there are subcommunities and people with their own views
on how things should be. Examples are numerous: GC vs noGC,
functional vs OOP, pro- and anti- heavily templated D code. The
point is: it is hard to satisfy everyone.


Re: What is the D plan's to become a used language?

2014-12-22 Thread via Digitalmars-d

On Monday, 22 December 2014 at 08:22:35 UTC, Daniel Murphy wrote:
Ola Fosheim Grøstad  wrote in message 
news:aimenbdjdflzgkkte...@forum.dlang.org...


Hardly, you have to be specific and make the number of issues 
covered in the next release small enough to create a feeling 
of being within reach in a short time span. People who don't 
care about fixing current issues should join a working group 
focusing on long term efforts (such as new features, syntax 
changes etc).


Saying it will work doesn't make it so.


You need a core team, and the core team needs to be able to cooperate
on the most important features for the greater good. Then you
have outside contributors with special interests, perhaps even
educational ones (like a master's student), who could make great
long-term contributions if you established work groups headed by
people who know the topic well.


More importantly: it makes no business sense to invest in an open 
source project that shows clear signs of being mismanaged. Create 
a spec that has business value, manage the project well and 
people with a commercial interest will invest. Why would I 
contribute to the compiler if I see no hope of it ever reaching a 
stable release that is better than the alternatives from a 
commercial perspective?


Re: What's missing to make D2 feature complete?

2014-12-22 Thread bioinfornatics via Digitalmars-d

On Saturday, 20 December 2014 at 20:14:21 UTC, Ola Fosheim
Grøstad wrote:
On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak 
wrote:

Just wondering what the general sentiment is.


I think the main problem is what is there already, which 
prevents more sensible performance features from being added 
and also is at odds with ensuring correctness.


By priority:

1. A well thought out ownership system to replace GC with 
compiler protocols/mechanisms that makes good static analysis 
possible and pointers alias free.  It should be designed before 
scope is added and a GC-free runtime should be available.


2. Redesign features and libraries to better support AVX 
auto-vectorization as well as explicit AVX programming.


3. Streamlined syntax.

4. Fast compiler-generated allocators with pre-initialization
for class instancing (get rid of emplace). Profiling based.


5. Monotonic integers (get rid of modular arithmetic) with
range constraints.


6. Constraints/logic based programming for templates

7. Either explicit virtual or de-virtualizing class functions
(whole program optimization).


8. Clean up the function signatures: ref, in, out, inout, and
get rid of call-by-name lazy, which has been known to be a
bug-inducing feature since Algol 60. There is a reason why other
languages avoid it.


9. Local precise GC with explicit collection for catching 
cycles in graph data-structures.


10. An alternative to try-catch exceptions that enforce 
error-checking without a performance penalty. E.g. separate 
error tracking on returns or transaction style exceptions 
(jump to root and free all resources on failure).


+1000

I will add: be consistent in Phobos:
- remove all old modules such as std.mmfile
- put @safe/@system/@trusted everywhere ...
- use immutability wherever possible (const ref, in,
immutable)
- aim for a smaller project with only working and non-deprecated
modules
- std.stream
- consistent use of ranges throughout Phobos


Re: What's missing to make D2 feature complete?

2014-12-22 Thread Francesco Cattoglio via Digitalmars-d

On Saturday, 20 December 2014 at 20:13:31 UTC, weaselcat wrote:
On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak 
wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


Unique! and RefCounted! in a usable state.


+1

No RefCounted classes and a non-reentrant GC make it really
awkward to write libraries that handle non-memory resources in a
nice way.
My experience with (old versions of) GFM has been horrible at
times: you have to close() everything by yourself, and if you forget
about that, sooner or later the GC will collect something, trigger a
call to close(), which in turn triggers a call to the logger, which
will end up with an InvalidMemoryOperationError.
Not being able to allocate during ~this() can be extremely
annoying for me.
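
For the record, a minimal sketch (with a made-up Resource class, not
GFM code) of the failure mode described above: if the GC runs this
finalizer during a collection, the allocation inside ~this() is what
raises InvalidMemoryOperationError.

import std.conv : to;

class Resource
{
    int id;
    this(int id) { this.id = id; }

    ~this()
    {
        // to!string allocates; doing that inside a GC-run finalizer is
        // exactly what ends in InvalidMemoryOperationError.
        auto msg = "closing resource #" ~ id.to!string;
        // log(msg);  // the logging call the post mentions
    }
}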


Re: What's missing to make D2 feature complete?

2014-12-22 Thread via Digitalmars-d

On Monday, 22 December 2014 at 11:03:33 UTC, bioinfornatics wrote:

- use everywhere as possible immutability ( const ref, in,
immutable )


Thanks, I forgot that one. Immutable values by default is indeed 
an important improvement. All by-value parameters to functions 
should be immutable, period.


Re: Rectangular multidimensional arrays for D

2014-12-22 Thread aldanor via Digitalmars-d
A gap in multi-dimensional rectangular array functionality in D
is surely a huge blocker when trying to use it for data science
tasks. I wonder what the general consensus on this is?


Re: BNF grammar for D?

2014-12-22 Thread Kingsley via Digitalmars-d

On Sunday, 21 December 2014 at 00:34:06 UTC, Kingsley wrote:
On Friday, 19 December 2014 at 02:53:02 UTC, Rikki Cattermole 
wrote:

On 19/12/2014 10:19 a.m., Kingsley wrote:
On Wednesday, 17 December 2014 at 21:05:05 UTC, Kingsley 
wrote:



Hi Bruno,

Thanks very much. I do have a couple of questions about DDT 
in

relation to my plugin.

Firstly - I'm not too familiar with parsing/lexing but at 
the moment

the Psi Structure I have implemented that comes from the DDT
parser/lexer is not in any kind of hierarchy. All the 
PsiElements are
available but all at the same level. Is this how the DDT 
parser
works? Or is it down to my implementation of the 
Parser/Lexer that

wraps it to create some hierarchy.

For intellij it's going to be vastly easier to have a 
hierarchy with
nested elements in order to get hold of a structure 
representing a
class or a function for example - in order to do things 
like get the
start and end lines of a class definition in order to apply 
code

folding and to use for searching for classes and stuff.

Secondly - how active is the development of DDT - does it keep up
with the D2 releases?

--Kingsley


After doing a bit more research it looks like I have to 
create the psi
hierarchy myself - my current psi structure is flat because 
I'm just
converting the DeeTokens into PsiElements directly. I've 
still got
some experimentation to do. On the plus side I implemented
commenting and code folding, but everything else needs a psi hierarchy.


I've done some more investigation and I do need to build the 
parser
myself in order to create the various constructs. I've made a 
start but
I haven't gotten very far yet because I don't fully 
understand the

correct way to proceed.

I also had a look at using the DeeParser - because it already 
does most
of what I want. However the intellij plugin wants a PsiParser 
which
returns an intellij ASTNode in the primary parse method. I 
can't see an
easy way to hook this up with DeeParser because the ParsedResult,
although it has a node method on it, gives back the wrong type
of ASTNode.


Any pointers on how I might get the DeeParser to interface to 
an

intellij ASTNode would be appreciated.


Read my codebase again, it'll answer a lot of questions. Your
parser is different, but what it produces shouldn't be. And
yes, it supports hierarchies.


Hi

So, after a lot of wrestling with the internals of
intellij, I finally managed to get a working parser
implementation that produces a psi hierarchy based on the
DeeParser from the ddt code.


The main issue was that Intellij only wants you to create a
parser using their toolset - either with a BNF grammar from
which you can then generate the parser, or with a hand-written
parser. Since I'm already using the DDT lexer and there is a
perfectly good DDT parser as well, I just wanted to re-use the
DDT parser.


Hi Bruno - would it be easy to return the list of tokens included
for each node in the DeeParser?


However Intellij does not provide any way to create a custom
AST/PSI structure or use an external parser. So I basically had
to wrap the DeeParser inside the Intellij parser and sync them
up programmatically. It's not the most efficient way in the
world but it at least works.


In the long term I will write a BNF grammar for Intellij (using 
their toolkit) but I can see that will take me several months 
so this is a quick way to get the plugin up and running with 
all the power of intellij extras without spending several 
months stuck learning all about the complexities of grammar 
parsing and lexing.


Thanks very much for your help. Once I get a bit more of the
cool stuff done I will release the plugin.




Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread logicchains via Digitalmars-d

On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:
On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright 
wrote:

I did notice this:

I updated the ldc D compiler earlier today (incidentally, as 
part of upgrading my system with pacman -Syu), and now it 
doesn't compile at all. It was previously compiling, and ran 
at around 90% the speed of C++ on ARM.


Sigh.


I have deployed experimental LDC package exactly to be able to 
detect such issues, otherwise it will never get there. It will 
be either fixed within a week or reverted to old mode.


I installed the new Arch Linux LDC package but it still fails 
with the same error: /usr/lib/libldruntime.so: undefined 
reference to `__mulodi4'


I did get GDC to work on ARM, but for some reason the resulting 
executable is horribly slow, around five times slower than what 
LDC produced. Are there any flags other than -O3 that I should be 
using?


Re: What's missing to make D2 feature complete?

2014-12-22 Thread Kagamin via Digitalmars-d
- delegates is another type system hole, if it's not going to be 
fixed, then it should be documented

- members of Object
- evaluate contracts at the caller side
- streams
- reference type AA


Re: Davidson/TJB - HDF5 - Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread aldanor via Digitalmars-d

On Monday, 22 December 2014 at 08:35:59 UTC, Laeeth Isharc wrote:

On Saturday, 22 March 2014 at 14:33:02 UTC, TJB wrote:
On Saturday, 22 March 2014 at 13:10:46 UTC, Daniel Davidson 
wrote:
Data storage for high volume would also be nice. A D 
implementation of HDF5, via wrappers or otherwise, would be a 
very useful project. Imagine how much more friendly the API 
could be in D. Python's tables library makes it very simple. 
You have to choose a language to not only process and 
visualize data, but store and access it as well.


Thanks
Dan


Well, I for one, would be hugely interested in such a thing.  A
nice D API to HDF5 would be a dream for my data problems.

Did you use HDF5 in your finance industry days then?  Just
curious.

TJB


Well for HDF5 - the bindings are here now - pre-alpha but will
get there soon enough - and wrappers coming along also.


Any thoughts/suggestions/help appreciated.  Github here:

https://github.com/Laeeth/d_hdf5


I wonder how much work it would be to port or implement Pandas 
type functionality in a D library.


@Laeeth

As a matter of fact, I've been working on HDF5 bindings for D as 
well -- I'm done with the binding/wrapping part so far (with 
automatic throwing of D exceptions whenever errors occur in the C 
library, and other niceties) and am hacking at the higher level 
OOP API -- can publish it soon if anyone's interested :) Maybe we 
can join efforts and make it work (that and standardizing a 
multi-dimensional array library in D).
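
Not aldanor's actual code, but a minimal sketch of that "throw a D
exception when the C library reports an error" style, assuming plain
extern(C) prototypes for H5Fopen/H5Fclose; the handle width and the
constant values below are assumptions, not taken from the real binding:

extern (C) nothrow @nogc
{
    alias hid_t = int;      // assumption: 32-bit handles as in HDF5 1.8.x
    alias herr_t = int;
    hid_t H5Fopen(const(char)* name, uint flags, hid_t fapl_id);
    herr_t H5Fclose(hid_t file_id);
}

enum uint H5F_ACC_RDONLY = 0x0000u;  // assumption: value from H5Fpublic.h
enum hid_t H5P_DEFAULT = 0;          // assumption

class HDF5Exception : Exception
{
    this(string msg, string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line);
    }
}

// Negative handles signal failure in the HDF5 C API; turn that into a throw.
hid_t checked(hid_t id, string what)
{
    if (id < 0)
        throw new HDF5Exception("HDF5 call failed: " ~ what);
    return id;
}

void main()
{
    import std.string : toStringz;
    auto file = checked(H5Fopen("data.h5".toStringz, H5F_ACC_RDONLY, H5P_DEFAULT),
                        "H5Fopen(data.h5)");
    scope (exit) H5Fclose(file);
    // ... dataset reads would go here ...
}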


Re: Davidson/TJB - HDF5 - Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

On Monday, 22 December 2014 at 11:59:11 UTC, aldanor wrote:

@Laeeth

As a matter of fact, I've been working on HDF5 bindings for D 
as well -- I'm done with the binding/wrapping part so far (with 
automatic throwing of D exceptions whenever errors occur in the 
C library, and other niceties) and am hacking at the higher 
level OOP API -- can publish it soon if anyone's interested :) 
Maybe we can join efforts and make it work (that and 
standardizing a multi-dimensional array library in D).



Oh, well :)  I would certainly be interested to see what you 
have, even if not finished yet.  My focus was sadly getting 
something working soon in a sprint, rather than building 
something excellent later, and I would think your work will be 
cleaner.


In any case, I would very much be interested in exchanging ideas 
or working together - on HDF5, on multi-dim or on other projects 
relating to finance/quant/scientific computing and the like.  So 
maybe you could send me a link when you are ready - either post 
here or my email address is my first name at my first name.com


Thanks.


Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

On Saturday, 22 March 2014 at 00:14:11 UTC, Daniel Davidson wrote:

On Friday, 21 March 2014 at 21:14:15 UTC, TJB wrote:

Walter,

I see that you will be discussing High Performance Code Using 
D at the 2014 DConf. This will be a very welcomed topic for 
many of us.  I am a Finance Professor.  I currently teach and 
do research in computational finance.  Might I suggest that 
you include some finance (say Monte Carlo options pricing) 
examples?  If you can get the finance industry interested in D 
you might see a massive adoption of the language.  Many are 
desperate for an alternative to C++ in that space.


Just a thought.

Best,

TJB


Maybe a good starting point would be to port some of QuantLib 
and see how the performance compares. In High Frequency Trading 
I think D would be a tough sell, unfortunately.


Thanks
Dan


In case it wasn't obvious from the discussion that followed: 
finance is a broad field with many different kinds of creature 
within, and there are different kinds of problems faced by 
different participants.


High Frequency Trading has peculiar requirements (relating to 
latency, amongst other things) that will not necessarily be 
representative of other areas.  Even within this area there is a 
difference between the needs of a Citadel in its option 
marketmaking activity versus the activity of a pure delta HFT 
player (although they also overlap).


A JP Morgan that needs to be able to price and calculate risk for 
large portfolios of convex instruments in its vanilla and exotic 
options books has different requirements, again.


You would typically use Monte Carlo (or quasi MC) to price more 
complex products for which there is not a good analytical 
approximation.  (Or to deal with the fact that volatility is not 
constant).  So that fits very much with the needs of large banks 
- and perhaps some hedge funds - but I don't think a typical HFT 
guy would be all that interested to know about this.  They are 
different domains.


Quant/CTA funds also have decent computational requirements, but
these are not necessarily high frequency.  Winton Capital, for
example, is one of the larger hedge funds in Europe by assets,
but they have talked publicly about emphasizing longer-term
horizons because even in liquid markets there simply is not the
liquidity to turn over the volume they would need to make an
impact on their returns.  In this case, whilst execution is
always important, the research side of things is where the value
gets created.  And it's not unusual to have quant funds where
every portfolio manager also programs.  (I will not mention
names.)  One might think that rapid iteration here could have
value.


http://www.efinancialcareers.co.uk/jobs-UK-London-Senior_Data_Scientist_-_Quant_Hedge_Fund.id00654869

Fwiw having spoken to a few people the past few weeks, I am 
struck by how hollowed-out front office has become, both within 
banks and hedge funds.  It's a nice business when things go well, 
but there is tremendous operating leverage, and if one builds up 
fixed costs then losing assets under management and having a poor 
period of performance (which is part of the game, not necessarily 
a sign of failure) can quickly mean that you cannot pay people 
(more than salaries) - which hurts morale and means you risk 
losing your best people.


So people have responded by paring down quant/research support to 
producing roles, even when that makes no sense.  (Programmers are 
not expensive).  In that environment, D may offer attractive 
productivity without sacrificing performance.


cross post hn: (Rust) _ _ without GC

2014-12-22 Thread Vic via Digitalmars-d

https://news.ycombinator.com/item?id=8781522

http://arthurtw.github.io/2014/12/21/rust-anti-sloppy-programming-language.html

c'est possible!

Oh how much free time and stability there would be if D core 
*moved* GC downstream.


Vic
ps: more cows waiting for slaughter: 
http://dlang.org/comparison.html


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread Iain Buclaw via Digitalmars-d
On 22 December 2014 at 11:45, logicchains via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:

 On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright wrote:

 I did notice this:

 I updated the ldc D compiler earlier today (incidentally, as part of
 upgrading my system with pacman -Syu), and now it doesn't compile at all. It
 was previously compiling, and ran at around 90% the speed of C++ on ARM.

 Sigh.


 I have deployed experimental LDC package exactly to be able to detect such
 issues, otherwise it will never get there. It will be either fixed within a
 week or reverted to old mode.


 I installed the new Arch Linux LDC package but it still fails with the same
 error: /usr/lib/libldruntime.so: undefined reference to `__mulodi4'

 I did get GDC to work on ARM, but for some reason the resulting executable
 is horribly slow, around five times slower than what LDC produced. Are there
 any flags other than -O3 that I should be using?

Other than -frelease (to turn off most non-release code generation), no.

Can you get a profiler on it to see where it's spending most of its time?

Thanks
Iain.


Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread aldanor via Digitalmars-d

On Monday, 22 December 2014 at 12:24:52 UTC, Laeeth Isharc wrote:


In case it wasn't obvious from the discussion that followed: 
finance is a broad field with many different kinds of creature 
within, and there are different kinds of problems faced by 
different participants.


High Frequency Trading has peculiar requirements (relating to 
latency, amongst other things) that will not necessarily be 
representative of other areas.  Even within this area there is 
a difference between the needs of a Citadel in its option 
marketmaking activity versus the activity of a pure delta HFT 
player (although they also overlap).


A JP Morgan that needs to be able to price and calculate risk 
for large portfolios of convex instruments in its vanilla and 
exotic options books has different requirements, again.


You would typically use Monte Carlo (or quasi MC) to price more 
complex products for which there is not a good analytical 
approximation.  (Or to deal with the fact that volatility is 
not constant).  So that fits very much with the needs of large 
banks - and perhaps some hedge funds - but I don't think a 
typical HFT guy would be all that interested to know about 
this.  They are different domains.


Quant/CTA funds also have decent computational requirements,
but these are not necessarily high frequency.  Winton Capital,
for example, is one of the larger hedge funds in Europe by
assets, but they have talked publicly about emphasizing
longer-term horizons because even in liquid markets there
simply is not the liquidity to turn over the volume they would
need to make an impact on their returns.  In this case,
whilst execution is always important, the research side of
things is where the value gets created.  And it's not unusual to
have quant funds where every portfolio manager also programs.
(I will not mention names.)  One might think that rapid
iteration here could have value.


http://www.efinancialcareers.co.uk/jobs-UK-London-Senior_Data_Scientist_-_Quant_Hedge_Fund.id00654869

Fwiw having spoken to a few people the past few weeks, I am 
struck by how hollowed-out front office has become, both within 
banks and hedge funds.  It's a nice business when things go 
well, but there is tremendous operating leverage, and if one 
builds up fixed costs then losing assets under management and 
having a poor period of performance (which is part of the game, 
not necessarily a sign of failure) can quickly mean that you 
cannot pay people (more than salaries) - which hurts morale and 
means you risk losing your best people.


So people have responded by paring down quant/research support 
to producing roles, even when that makes no sense.  
(Programmers are not expensive).  In that environment, D may 
offer attractive productivity without sacrificing performance.


I agree with most of these points.

For some reason, people often equate quant finance / high
frequency trading with one of two things: either ultra-low-latency
execution or option pricing, which is just wrong. In all
likelihood, the execution is performed on FPGA co-located grids,
so that part is out of the question; and options trading is just
one of the many things hedge funds do. What takes the most time
and effort is the usual data science (which in many cases boils
down to data munging), as in: managing huge amounts of raw
structured/unstructured high-frequency data; extracting the
valuable information and learning strategies; implementing
fast/efficient backtesting frameworks, simulators, etc. The need
for efficiency here naturally comes from the fact that a
typical task in the pipeline requires dozens or hundreds of GB of
RAM and dozens of hours of runtime on a high-grade box (so no one
would really care if that GC is going to stop the world for 0.05
seconds).


In this light, as I see it, D's main advantage is a high 
runtime-efficiency / time-to-deploy ratio (whereas one of the 
main disadvantages for practitioners would be the lack of 
standard tools for working with structured multidimensional data 
+ linalg, something like numpy or pandas).


Cheers.


Re: What is the D plan's to become a used language?

2014-12-22 Thread Joakim via Digitalmars-d
On Monday, 22 December 2014 at 10:30:47 UTC, Ola Fosheim Grøstad 
wrote:
More importantly: it makes no business sense to invest in an 
open source project that shows clear signs of being mismanaged. 
Create a spec that has business value, manage the project well 
and people with a commercial interest will invest. Why would I 
contribute to the compiler if I see no hope of it ever reaching 
a stable release that is better than the alternatives from a 
commercial perspective?


It is not clear that the core team wants commercial investment, 
that's merely my guess of how D might become more polished and 
popular.  AFAICT, Andrei and Walter hope to get to a million 
users mostly through volunteers, which is a pipe dream if you ask 
me, though they don't appear to be against commercial 
involvement.  As you say, without presenting a more organized 
front, maybe such commercial investment is unlikely.


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread via Digitalmars-d

On Monday, 22 December 2014 at 12:43:19 UTC, Iain Buclaw via
Digitalmars-d wrote:

On 22 December 2014 at 11:45, logicchains via Digitalmars-d
digitalmars-d@puremagic.com wrote:

On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:


On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright 
wrote:


I did notice this:

I updated the ldc D compiler earlier today (incidentally, 
as part of
upgrading my system with pacman -Syu), and now it doesn't 
compile at all. It
was previously compiling, and ran at around 90% the speed of 
C++ on ARM.


Sigh.



I have deployed experimental LDC package exactly to be able 
to detect such
issues, otherwise it will never get there. It will be either 
fixed within a

week or reverted to old mode.



I installed the new Arch Linux LDC package but it still fails 
with the same
error: /usr/lib/libldruntime.so: undefined reference to 
`__mulodi4'


I did get GDC to work on ARM, but for some reason the 
resulting executable
is horribly slow, around five times slower than what LDC 
produced. Are there

any flags other than -O3 that I should be using?


Other than -frelease (to turn off most non-release code 
generation), no.


Can you get a profiler on it to see where it's spending most of 
it's time?


Thanks
Iain.


With the GDC build, the GC stops the main thread every single
time getLongestPath is executed. This does not happen with the
LDC build.

See :
http://unix.cat/d/lpathbench/callgrind.out.GDC
http://unix.cat/d/lpathbench/callgrind.out.LDC


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread logicchains via Digitalmars-d
On Monday, 22 December 2014 at 12:43:19 UTC, Iain Buclaw via 
Digitalmars-d wrote:

On 22 December 2014 at 11:45, logicchains via Digitalmars-d
digitalmars-d@puremagic.com wrote:

On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:


On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright 
wrote:


I did notice this:

I updated the ldc D compiler earlier today (incidentally, 
as part of
upgrading my system with pacman -Syu), and now it doesn't 
compile at all. It
was previously compiling, and ran at around 90% the speed of 
C++ on ARM.


Sigh.



I have deployed experimental LDC package exactly to be able 
to detect such
issues, otherwise it will never get there. It will be either 
fixed within a

week or reverted to old mode.



I installed the new Arch Linux LDC package but it still fails 
with the same
error: /usr/lib/libldruntime.so: undefined reference to 
`__mulodi4'


I did get GDC to work on ARM, but for some reason the 
resulting executable
is horribly slow, around five times slower than what LDC 
produced. Are there

any flags other than -O3 that I should be using?


Other than -frelease (to turn off most non-release code 
generation), no.


Can you get a profiler on it to see where it's spending most of 
it's time?


Thanks
Iain.


I ran callgrind on it, 75% of the runtime is spent in 
_D2gc2gc2GC6malloc, and 5% in reduce.


Re: Rewrite rules for ranges

2014-12-22 Thread renoX via Digitalmars-d

On Saturday, 20 December 2014 at 14:16:05 UTC, bearophile wrote:
When you use UFCS chains there are many coding patterns that 
probably are hard to catch for the compiler, but are easy to 
optimize very quickly:

[cut]

.reverse.reverse = id


.reverse.reverse is a coding pattern??

;-)

renoX


Re: Rewrite rules for ranges

2014-12-22 Thread bearophile via Digitalmars-d

renoX:


.reverse.reverse is a coding pattern??


Yes, similar patterns can come out after inlining.

Bye,
bearophile
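
A small illustration of how such a pattern can surface once generic
helpers are composed and inlined (lastToFirst is a hypothetical
helper; std.range.retro is documented to undo itself, so the library
already applies this particular rewrite):

import std.range : retro;
import std.algorithm : equal;

auto lastToFirst(R)(R r) { return r.retro; }   // hypothetical helper

void main()
{
    auto data = [1, 2, 3];
    // Composing two "view it backwards" helpers; after inlining the chain
    // is effectively retro(retro(data)), i.e. the identity.
    assert(lastToFirst(lastToFirst(data)).equal(data));
}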


Re: Do everything in Java…

2014-12-22 Thread via Digitalmars-d
On Thursday, 18 December 2014 at 08:56:29 UTC, ketmar via 
Digitalmars-d wrote:

On Thu, 18 Dec 2014 08:09:08 +
via Digitalmars-d digitalmars-d@puremagic.com wrote:

On Thursday, 18 December 2014 at 01:16:38 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Thu, Dec 18, 2014 at 12:37:43AM +, via Digitalmars-d 
 wrote:
 Regular HD I/O is quite slow, but with fast SSD on PCIe and 
 a good

 database-like index locked to memory…

 That's hardly a solution that will work for the general D 
 user, many of

 whom may not have this specific setup.

By the time this would be ready, most programmers will have 
PCIe interfaced SSD. At 100.000 IOPS it is pretty ok.


didn't i say that the whole 64-bit hype sux? ;-) that's about 
memory

as database.


Heh, btw, I just read on osnews.com that HP is going to create a
new hardware platform, The Machine, and a new operating system for
it based on resistor-based non-volatile memory called memristors
that is comparable to DRAM in speed. Pretty interesting actually:


http://www.technologyreview.com/news/533066/hp-will-release-a-revolutionary-new-operating-system-in-2015/



Allocators stack

2014-12-22 Thread Allocator stack via Digitalmars-d

How about an allocator stack? An allocator would be e.g. one of these:
https://github.com/andralex/phobos/blob/allocator/std/allocator.d
-
allocatorStack.push(new GCAllocator);
// Some code that uses memory allocation
auto a = ['x', 'y'];
a ~= ['a', 'b']; // uses allocatorStack.top.realloc(...);
allocatorStack.pop();
-
Allocators must be equipped with dynamic polymorphism. For those
cases when that is too expensive, an attribute
@allocator(yourAllocator) applied to a declaration sets the allocator
statically.

-
@allocator(Mallocator.instance)
void f()
{
    // Implicitly use the global (TLS?) Mallocator allocator when
    // allocating an object, resizing an array, etc.
}

@allocator(StackAllocator)
void f()
{
    // Implicitly use the allocatorStack.top() allocator when
    // allocating an object, resizing an array, etc.
}
-

There are some issues to solve, e.g. how to avoid mixing memory
from different allocators.


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread Iain Buclaw via Digitalmars-d
On 22 December 2014 at 13:45, via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Monday, 22 December 2014 at 12:43:19 UTC, Iain Buclaw via
 Digitalmars-d wrote:

 On 22 December 2014 at 11:45, logicchains via Digitalmars-d
 digitalmars-d@puremagic.com wrote:

 On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:


 On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright wrote:


 I did notice this:

 I updated the ldc D compiler earlier today (incidentally, as part of
 upgrading my system with pacman -Syu), and now it doesn't compile at
 all. It
 was previously compiling, and ran at around 90% the speed of C++ on
 ARM.

 Sigh.



 I have deployed experimental LDC package exactly to be able to detect
 such
 issues, otherwise it will never get there. It will be either fixed
 within a
 week or reverted to old mode.



 I installed the new Arch Linux LDC package but it still fails with the
 same
 error: /usr/lib/libldruntime.so: undefined reference to `__mulodi4'

 I did get GDC to work on ARM, but for some reason the resulting
 executable
 is horribly slow, around five times slower than what LDC produced. Are
 there
 any flags other than -O3 that I should be using?


 Other than -frelease (to turn off most non-release code generation), no.

 Can you get a profiler on it to see where it's spending most of it's time?

 Thanks
 Iain.


 With the GDC build, the GC stops the main thread every single
 time getLongestPath is executed. This does not happen with the
 LDC build.

 See :
 http://unix.cat/d/lpathbench/callgrind.out.GDC
 http://unix.cat/d/lpathbench/callgrind.out.LDC


Thanks, looks like getLongestPath creates a closure - this causes
memory to be allocated every single time the function is called !!!

I imagine that LDC can boast smarter heuristics here - I recall David
talking about a memory optimisation that moves the heap allocation to
the stack if it can verify that the closure doesn't escape the
function.

We are a little behind the times on this - and so is DMD.

Regards
Iain.
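
Not the actual lpathbench source, just a minimal sketch of the kind of
code that forces a heap-allocated closure: a nested function that
captures a local and is taken by address as a delegate makes a
conservative compiler allocate the enclosing frame on the GC heap on
every call.

int longestPathSketch(const int[][] graph, int start)
{
    auto visited = new bool[graph.length];

    int visit(int node)              // nested function capturing `visited`
    {
        visited[node] = true;
        int best = 0;
        foreach (next; graph[node])
            if (!visited[next])
            {
                const len = 1 + visit(next);
                if (len > best) best = len;
            }
        visited[node] = false;
        return best;
    }

    // Taking the address turns `visit` into a delegate; unless the compiler
    // proves it never escapes, the frame holding `visited` is heap-allocated
    // on every call - hence the per-call allocations seen in the profile.
    int delegate(int) dg = &visit;
    return dg(start);
}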


Re: Do everything in Java…

2014-12-22 Thread deadalnix via Digitalmars-d
On Sunday, 21 December 2014 at 10:00:36 UTC, Russel Winder via 
Digitalmars-d wrote:
Although the vast majority of Java is used in a basically I/O 
bound
context, there is knowledge of and desire to improve Java in a 
CPU-
bound context. The goal here is to always be as fast as C and 
C++ for
all CPU-bound codes. A lot of people are already seeing Java 
being
faster than C and C++, but they have to use primitive types to 
achieve
this. With the shift to internal iteration and new JITS, the 
aim is to

achieve even better but using reference types in the code.



That is quite a claim. It may be true in some contexts, and I'd go
as far as to say that vanilla code in C/C++ tends to be slower
than the vanilla version in Java, but ultimately C and C++ offer
more flexibility, which means that if you are willing to spend the
time to optimize, Java won't be as fast. Generally, the killer is
memory layout, which allows fitting more in cache and running faster.
Java is addicted to indirections.


Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Daniel Davidson via Digitalmars-d

On Monday, 22 December 2014 at 13:37:55 UTC, aldanor wrote:
For some reason, people often relate quant finance / high 
frequency trading with one of the two: either ultra-low-latency 
execution or option pricing, which is just wrong. In most 
likelihood, the execution is performed on FPGA co-located 
grids, so that part is out of question; and options trading is 
just one of so many things hedge funds do. What takes the most 
time and effort is the usual data science (which in many 
cases boil down to data munging), as in, managing huge amounts 
of raw structured/unstructured high-frequency data; extracting 
the valuable information and learning strategies;



This description feels too broad. Assume that it is the data
munging that takes the most time and effort. Included in that are
usually transformations like (Data -> Numeric Data
-> Mathematical Data Processing -> Mathematical
Solutions/Calibrations -> Math consumers (trading systems, low
frequency/high frequency/in general)). The quantitative data
science is about turning data into value using numbers. The
better you are at first getting to an all-numbers world to start
analyzing, the better off you will be. But once in the all-numbers
world, isn't it all about math, statistics, mathematical
optimization, insight, iteration/mining, etc.? Isn't that right
now the world of R, NumPy, Matlab, etc., and more recently
Julia? I don't see D attempting to tackle that at this point. If
the bulk of the work for the data science piece is the maths,
which I believe it is, then the attraction of D as a data
science platform is muted. If the bulk of the work is
preprocessing data to get to an all-numbers world, then in that
space D might shine.



implementing fast/efficient backtesting frameworks, simulators 
etc. The need for efficiency here naturally comes from the 
fact that a typical task in the pipeline requires 
dozens/hundreds GB of RAM and dozens of hours of runtime on a 
high-grade box (so noone would really care if that GC is going 
to stop the world for 0.05 seconds).




What is a backtesting system in the context of Winton Capital? Is 
it primarily a mathematical backtesting system? If so it still 
may be better suited to platforms focusing on maths.


Re: Checksums of files from Downloads

2014-12-22 Thread Andrei Alexandrescu via Digitalmars-d

On 12/10/14 11:20 PM, AndreyZ wrote:

I wanted to join D community, but I realized that I even cannot
install tools from the site securely. (Correct me if I wrong.)

To dlang.org maintainers:

I trust you but I don't trust man-in-the-middle.

So, could you at least provide checksums (e.g. sha1) for all
files which are available on the following pages, please.
http://dlang.org/download.html
http://code.dlang.org/download

Also it would be great if you:
1) install a good (not self-signed) SSL certificate to allow
visitors to use HTTPS;
2) sign all *.exe files provided in the download sections.


Added https://issues.dlang.org/show_bug.cgi?id=13887 -- Andrei


DConf 2015?

2014-12-22 Thread Adam D. Ruppe via Digitalmars-d
By this time last year, dconf 2014 preparations were already 
under way but I haven't heard anything this year. Is another one 
planned?


Re: What is the D plan's to become a used language?

2014-12-22 Thread deadalnix via Digitalmars-d

On Monday, 22 December 2014 at 01:08:00 UTC, ZombineDev wrote:
NO. Just don't use features that you don't understand or like, 
but

don't punish happy D users by demanding a crippled D version.


On Sunday, 21 December 2014 at 22:21:21 UTC, Vic wrote:

...


That is a valid argument if features are orthogonal. When they
aren't, it is just rhetorical bullshit.


Re: DIP66 v1.1 (Multiple) alias this.

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d

On 21/12/14 11:11, Dicebot via Digitalmars-d wrote:

On Sunday, 21 December 2014 at 08:23:34 UTC, deadalnix wrote:

See also: https://issues.dlang.org/show_bug.cgi?id=10996


I have nothing against this, but this is, indeed, completely out of the scope
(!) of the DIP.


I think it belongs to DIP22


In fact it's already in there:

A public alias to a private symbol makes the symbol
accessible through the alias. The alias itself needs
to be in the same module, so this doesn't impair
protection control.

It's just not implemented for alias this.
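
A minimal sketch of the DIP22 rule quoted above, with made-up module
and symbol names; this same rule is what is not yet implemented for
alias this:

module pkg.impl;

private int secret() { return 42; }

// The alias lives in the same module as the private symbol, so exposing
// it publicly is allowed; other modules call it as pkg.impl.revealed().
public alias revealed = secret;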


Re: DIP66 v1.1 (Multiple) alias this.

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d

On 21/12/14 09:23, deadalnix via Digitalmars-d wrote:

I have nothing against this, but this is, indeed, completely out of the scope
(!) of the DIP.


Fair enough.  I wanted to make sure there was nothing here that could interact 
nastily with protection attributes.




Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread aldanor via Digitalmars-d
On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson 
wrote:

I don't see D attempting to tackle that at this point.
If the bulk of the work for the data sciences piece is the 
maths, which I believe it is, then the attraction of D as a 
data sciences platform is muted. If the bulk of the work is 
preprocessing data to get to an all numbers world, then in that 
space D might shine.

That is one of my points exactly -- the bulk of the work, as
you put it, is quite often the data processing/preprocessing 
pipeline (all the way from raw data parsing, aggregation, 
validation and storage to data retrieval, feature extraction, and 
then serialization, various persistency models, etc). One thing 
is fitting some model on a pandas dataframe on your lap in an 
ipython notebook, another thing is running the whole pipeline on 
massive datasets in production on a daily basis, which often 
involves very low-level technical stuff, whether you like it or 
not. Coming up with cool algorithms and doing fancy maths is fun 
and all, but it doesn't take nearly as much effort as integrating 
that same thing into an existing production system (or developing 
one from scratch). (and again, production != execution in this 
context)


On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson 
wrote:
What is a backtesting system in the context of Winton Capital? 
Is it primarily a mathematical backtesting system? If so it 
still may be better suited to platforms focusing on maths.

Disclaimer: I don't work for Winton :) Backtesting in trading is
usually a very CPU-intensive (and sometimes RAM-intensive) task 
that can be potentially re-run millions of times to fine-tune 
some parameters or explore some sensitivities. Another common 
task is reconciling with how the actual trading system works 
which is a very low-level task as well.


Does anyone want to render D with gsource ?

2014-12-22 Thread Anoymous via Digitalmars-d

A few months ago I saw this:

https://code.google.com/p/gource/

Does anyone want to render D with gsource (dmd/phobos) ?



Re: Does anyone want to render D with gsource ?

2014-12-22 Thread Andrej Mitrovic via Digitalmars-d
On 12/22/14, Anoymous via Digitalmars-d digitalmars-d@puremagic.com wrote:
 A few monthes ago I've seen this:

 https://code.google.com/p/gource/

Ahh I always wanted to see this visualization for dlang repos!!

Whoever makes this happen, 1000 internets to you.


Re: DIP69 - Implement scope for escape proof references

2014-12-22 Thread Walter Bright via Digitalmars-d

On 12/22/2014 12:04 AM, Dicebot wrote:

Point of transitive scope is to make easy to expose complex custom data
structures without breaking memory safety.


I do understand that. Making it work with the type system is another matter 
entirely - it's far more complex than just adding a qualifier. 'inout' looks 
simple but still has ongoing issues.


And the thing is, wrappers can be used instead of qualifiers, in the same places 
in the same way. It's much simpler.


Re: DConf 2015?

2014-12-22 Thread Walter Bright via Digitalmars-d

On 12/22/2014 9:40 AM, Adam D. Ruppe wrote:

By this time last year, dconf 2014 preparations were already under way but I
haven't heard anything this year. Is another one planned?


Yes. Still working on getting confirmation of the date.


Re: DConf 2015?

2014-12-22 Thread Iain Buclaw via Digitalmars-d
On 22 December 2014 at 20:52, Walter Bright via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On 12/22/2014 9:40 AM, Adam D. Ruppe wrote:

 By this time last year, dconf 2014 preparations were already under way but
 I
 haven't heard anything this year. Is another one planned?


 Yes. Still working on getting confirmation of the date.

You mean to say that it's moving from its usual time slot next year?
(Weekend before spring bank holiday)


Re: Do everything in Java…

2014-12-22 Thread Jacob Carlborg via Digitalmars-d

On 2014-12-21 20:37, Adam D. Ruppe wrote:


1) versions don't match. Stuff like rvm and bundler can mitigate this,


I'm not exactly sure what you mean, but using Rails without bundler
is just mad.



but they don't help searching the web. Find a technique and try it...
but it requires Rails 2.17 and the app depends in 2.15 or something
stupid like that. I guess you can't blame them for adding new features,
but I do wish the documentation for old versions was always easy to get
to and always easily labeled so it would be obvious. (D could do this too!)


This page [1] contains documentation for Rails, for 4.1.x, 4.0.x, 3.2.x 
and 2.3.x. It's basically the latest version of a given branch. This 
page [2] contains the API reference for Rails, it's not easy to find but 
you can append vX.Y.Z to that URL to get a specific version.



2) SSL/TLS just seems to randomly fail in applications and the tools
like gem and bundle. Even updating the certificates on the system didn't
help most recently, I also had to set an environment variable, which
seems just strange.


I think I have seen that once or twice when upgrading to a new version
of OS X. But that's usually because your gems and other software are
still built for the older version. I can't recall seeing this for a new
project.



3) Setting up the default WEBrick isn't too bad, but making it work on a
production system (like apache passenger) has been giving us trouble.
Got it working for the most part pretty fast, but then adding more stuff
became a painful config nightmare. This might be the application (based
on Rails 2 btw) more than the platform in general, but it still irked me.


I haven't been too involved in that part. I have set up one or two apps
with passenger and it was pretty easy to just follow the installation.
Although, those weren't production servers.



4) It is abysmally slow, every little thing takes forever. DB changes,
slow. Asset recompiles: slow. Tests: slow. Restarting the server: slow.
The app itself: slow. I'm told Ruby on the JVM is faster though :)


Yeah, that's one major issue. It can be very, very slow. But I also
think it's too easy to write slow code with something like ActiveRecord.
It's easy to forget there's actually a database behind it.



My main problems with ruby on rails though are bad decisions and just
underwhelming aspect of actually using it. Everyone sells it as being
the best thing ever and so fast to develop against but I've seen better
like everything. Maybe it was cool in 2005 (if you could actually get it
running then...), but not so much anymore.


I find it difficult to find something better. I think that's mostly
because of the existing ecosystem with plugins and libraries available.
I feel the same way with D vs Ruby. At some point I just get tired
of developing my own libraries and just want to get something done.


[1] http://guides.rubyonrails.org/
[2] http://api.rubyonrails.org

--
/Jacob Carlborg


  1   2   >