Re: D at University of Minnesota

2013-08-18 Thread Ali Çehreli

What a wonderful report! :)

On 08/17/2013 08:22 AM, Carl Sturtivant wrote:

 Ali Çehreli's tutorial played a central role supporting students
 especially during the first half of the course --- without it the course
 simply would not have worked, so many thanks Ali --- and an important
 part of that is its linearity --- it can be read with only backward
 dependencies. This meant that with hard work even students of little
 experience and only moderate current abilities could get up to speed,
 and we saw just that. It is hard to overstate this factor.

Thank you, that made my day (and summer)! :)

Ali



Re: SIMD implementation of dot-product. Benchmarks

2013-08-18 Thread Andrei Alexandrescu

On 8/17/13 11:50 AM, Ilya Yaroshenko wrote:

http://spiceandmath.blogspot.ru/2013/08/simd-implementation-of-dot-product_17.html


Ilya


The images never load for me, all I see is some "Request timed out" 
stripes after the text.


Typo: Ununtu


Andrei



Re: SIMD implementation of dot-product. Benchmarks

2013-08-18 Thread Ilya Yaroshenko
On Sunday, 18 August 2013 at 16:32:33 UTC, Andrei Alexandrescu 
wrote:

On 8/17/13 11:50 AM, Ilya Yaroshenko wrote:

http://spiceandmath.blogspot.ru/2013/08/simd-implementation-of-dot-product_17.html


Ilya


The images never load for me, all I see is some "Request timed 
out" stripes after the text.


I have changed the interactive charts to PNG images.
Does it work?



Typo: Ununtu


Andrei


Re: SIMD implementation of dot-product. Benchmarks

2013-08-18 Thread Iain Buclaw
On 17 August 2013 19:50, Ilya Yaroshenko ilyayaroshe...@gmail.com wrote:
 http://spiceandmath.blogspot.ru/2013/08/simd-implementation-of-dot-product_17.html

 Ilya



Having a quick flick through the simd.d source, I see LDC's and GDC's
implementation couldn't be any more wildly different... (LDC's doesn't
even look like D code thanks to pragma LDC_inline_ir :)


Thumbs up on marking functions as pure - I should really document 
somewhere how GCC builtins are fleshed out into the gcc.builtins 
module...


-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: SIMD implementation of dot-product. Benchmarks

2013-08-18 Thread Andrei Alexandrescu

On 8/18/13 10:24 AM, Ilya Yaroshenko wrote:

On Sunday, 18 August 2013 at 16:32:33 UTC, Andrei Alexandrescu wrote:

On 8/17/13 11:50 AM, Ilya Yaroshenko wrote:

http://spiceandmath.blogspot.ru/2013/08/simd-implementation-of-dot-product_17.html



Ilya


The images never load for me, all I see is some "Request timed out"
stripes after the text.


I have changed the interactive charts to PNG images.
Does it work?


Yes, thanks.

Andrei



Gumbo-d - D binding for Gumbo HTML 5 Parser

2013-08-18 Thread Christopher Bertels

Hey everyone,

I started using D recently and have enjoyed the experience so far.
I've used vibe.d for a web service I wrote for work, and I've 
recently started working on a D binding for Gumbo, Google's newly 
released HTML 5 parser library [1].


Here's a comparison of one of the examples that comes with gumbo:

Original:
https://github.com/google/gumbo-parser/blob/master/examples/get_title.c
gumbo-d: 
https://github.com/bakkdoor/gumbo-d/blob/master/examples/get_title.d


I've added some helper methods for searching & dealing with child 
nodes in the DOM, and it's been really easy as a newcomer to come 
up with the right compile-time templates. I can't say it's ever 
been this easy in C++ or any other language.
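
For instance, here's a rough sketch of the kind of compile-time 
helper I mean (illustrative names only, not the actual gumbo-d API):

import std.algorithm : filter;

// Stand-in for a parsed DOM node (hypothetical type).
struct Node
{
    string tag;
    Node*[] children;
}

// The tag is a template parameter, so the predicate is fixed at
// compile time and a search reads like a property access.
auto childrenByTag(string tag)(Node* parent)
{
    return parent.children.filter!(c => c.tag == tag);
}

// Usage: auto titles = root.childrenByTag!"title";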


Let me know what you think of the code. I started reading TDPL, 
but I haven't gotten very far yet and I've learned mostly by 
checking out other code, reading the online docs and playing 
around. The language seems pretty easy to learn so far and I 
really like the power of it.


The code is on github: https://github.com/bakkdoor/gumbo-d
I've also added it to dub's package repository.

Any feedback is welcome :)

Cheers,
Christopher.

[1] 
http://google-opensource.blogspot.de/2013/08/gumbo-c-library-for-parsing-html.html


Re: DDT 0.7.0 released

2013-08-18 Thread Jacob Carlborg

On 2013-08-17 14:49, Bruno Medeiros wrote:


Someone else had a similar problem; a good guess is that you're running
with a 1.6 JVM, and you need a 1.7 JVM.


I did install a 1.7 JVM, although I never verified that it's actually 
1.7 that is used. I'll have to check that.


--
/Jacob Carlborg


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-18 Thread John Joyus

On 08/11/2013 04:22 AM, Walter Bright wrote:

http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


This article claims the Performance [of D] is equivalent to C.

Is that true? I mean even if D reaches 90% of C's performance, I still 
consider it great because of its productive features, but are there any 
benchmarks done?




Re: GPGPUs

2013-08-18 Thread luminousone

On Sunday, 18 August 2013 at 05:05:48 UTC, Atash wrote:

On Sunday, 18 August 2013 at 03:55:58 UTC, luminousone wrote:
You do have limited Atomics, but you don't really have any 
sort of complex messages, or anything like that.


I said 'point 11', not 'point 10'. You also dodged points 1 and 
3...


Intel doesn't have a dog in this race, so there is no way to 
know what they plan on doing, if anything at all.


http://software.intel.com/en-us/vcsource/tools/opencl-sdk
http://www.intel.com/content/www/us/en/processors/xeon/xeon-phi-detail.html

Just based on those, I'm pretty certain they 'have a dog in 
this race'. The dog happens to be running with MPI and OpenCL 
across a bridge made of PCIe.


The Xeon Phi is interesting insofar as it takes generic 
programming to a more parallel environment. However, it has some 
serious limitations that will heavily damage its potential 
performance.


AVX2 is completely the wrong path to go about improving 
performance in parallel computing. The SIMD nature of this 
instruction set means that scalar operations, or even just not 
being able to fill the giant 256/512-bit registers, waste huge 
chunks of this thing's peak theoretical performance, and if any 
rules apply to instruction pairing on this multi-issue pipeline 
you have yet more potential for wasted cycles.


I haven't seen anything about Intel's micro-thread scheduler, or 
how these chips handle the mass context switching natural to 
micro-threaded environments. These two items make a huge difference 
in performance; comparing Radeon VLIW5/4 to Radeon GCN is a good 
example: most of the performance benefit of GCN comes from the ease 
of scheduling scalar pipelines over more complex pipes with 
instruction pairing rules etc.


Frankly, Intel has some cool stuff, but they have been caught 
with their pants down; they have depended on their large fab 
advantage to carry them and got lazy.


We likely are watching AMD64 all over again.

The reason to point out HSA is that it is really easy to add 
support for; it is not a giant task like OpenCL would be. A 
few changes to the front-end compiler are all that is needed; 
LLVM's backend does the rest.


H'okay. I can accept that.

OpenCL isn't just a library, it is a language extension that 
is run through a preprocessor which compiles the embedded 
__KERNEL and __DEVICE functions into usable code, and then 
outputs .c/.cpp files for the C compiler to deal with.


But all those extra bits are part of the computing 
*environment*. Is there something wrong with requiring the 
proper environment for an executable?


A more objective question: which devices are you trying to 
target here?


At first, simply a different way of approaching std.parallelism-like 
functionality, with an eye to GPGPU in the future when easy 
integration solutions pop up (such as HSA).


Those are all platform specific; they change based on the whim 
and fancy of NVIDIA and AMD with each and every new chip 
released: the size and configuration of CUDA clusters, or 
compute clusters, or EUs, or whatever the hell chip maker X 
feels like using at the moment.


Long term this will all be managed by the underlying support 
software in the video drivers, and operating system kernel. 
Putting any effort into this is a waste of time.


Yes. And the only way to optimize around them is to *know 
them*, otherwise you're pinning the developer down the same way 
OpenMP does. Actually, even worse than the way OpenMP does - at 
least OpenMP lets you set some hints about how many threads you 
want.


It would be best to wait for a more generic software platform, to 
find out how this is handled by the next generation of 
micro-threading tools.


The way OpenCL/CUDA work reminds me too much of someone setting up 
Tomcat to have Java code generate PHP that runs on their Apache 
server, just because they can. I would rather have tighter 
integration with the core language than a language within a 
language.



void example( aggregate in float a[] ; key, in float b[], out float c[] ) {
    c[key] = a[key] + b[key];
}

example(a,b,c);

In the function declaration you can think of the aggregate as 
basically having the reverse order of the items in a foreach 
statement.


int a[100] = [ ... ];
int b[100];
foreach( v, k ; a ) { b[k] = a[k]; }

int a[100] = [ ... ];
int b[100];

void example2( aggregate in float A[] ; k, out float B[] ) {
    B[k] = A[k];
}


example2(a,b);


Contextually solid. Read my response to the next bit.

I am pretty sure they are simply multiplying the index value 
by the unit size they desire to work on.


int a[100] = [ ... ];
int b[100];
void example3( aggregate in range r ; k, in float a[], float b[] ) {
    b[k]   = a[k];
    b[k+1] = a[k+1];
}

example3( 0 .. 50 , a,b);

Then likely they are simply executing multiple __KERNEL 
functions in sequence, would be my guess.


I've implemented this algorithm before in OpenCL already, and 
what you're saying so far doesn't rhyme with what's needed.


There are at least 

Re: GPGPUs

2013-08-18 Thread Atash

On Sunday, 18 August 2013 at 06:22:30 UTC, luminousone wrote:
The Xeon Phi is interesting insofar as it takes generic 
programming to a more parallel environment. However, it has some 
serious limitations that will heavily damage its potential 
performance.


AVX2 is completely the wrong path to go about improving 
performance in parallel computing. The SIMD nature of this 
instruction set means that scalar operations, or even just not 
being able to fill the giant 256/512-bit registers, waste huge 
chunks of this thing's peak theoretical performance, and if any 
rules apply to instruction pairing on this multi-issue pipeline 
you have yet more potential for wasted cycles.


I haven't seen anything about Intel's micro-thread scheduler, 
or how these chips handle the mass context switching natural to 
micro-threaded environments. These two items make a huge 
difference in performance; comparing Radeon VLIW5/4 to Radeon 
GCN is a good example: most of the performance benefit of GCN 
comes from the ease of scheduling scalar pipelines over more 
complex pipes with instruction pairing rules etc.


Frankly, Intel has some cool stuff, but they have been caught 
with their pants down; they have depended on their large fab 
advantage to carry them and got lazy.


We likely are watching AMD64 all over again.


Well, I can't argue that one.

At first, simply a different way of approaching std.parallelism-like 
functionality, with an eye to GPGPU in the future when easy 
integration solutions pop up (such as HSA).


I can't argue with that either.

It would be best to wait for a more generic software platform, 
to find out how this is handled by the next generation of 
micro-threading tools.


The way OpenCL/CUDA work reminds me too much of someone setting 
up Tomcat to have Java code generate PHP that runs on their 
Apache server, just because they can. I would rather have tighter 
integration with the core language than a language within a 
language.


Fair point. I have my own share of idyllic wants, so I can't 
argue with those.


Low-level optimization is a wonderful thing, but I almost 
wonder if this will always be something where, in order to do the 
low-level optimization, you will be using the vendor's provided 
platform for doing it, as no generic tool will be able to match 
the custom one.


But OpenCL is by no means a 'custom tool'. CUDA, maybe, but 
OpenCL just doesn't fit the bill in my opinion. I can see it 
being possible in the future that it'd be considered 'low-level', 
but it's a fairly generic solution. A little hackneyed under your 
earlier metaphors, but still a generic, standard solution.


Most of my interaction with the GPU is via shader programs for 
OpenGL; I have only lightly used CUDA for some image processing 
software, so I am certainly not the one to give in-depth detail 
on optimization strategies.


There was a *lot* of stuff that opened up when vendors dumped 
GPGPU out of Pandora's box. If you want to get a feel for some 
optimization strategies and what they require, check this site 
out: http://www.bealto.com/gpu-sorting_intro.html (and I hope I'm 
not insulting your intelligence here, if I am, I truly apologize).



sorry on point 1, that was a typo, I meant

1. The range must be known prior to execution of a gpu code 
block.


as for

3. Code blocks can only receive a single range, it can however 
be multidimensional


int a[100] = [ ... ];
int b[100];
void example3( aggregate in range r ; k, in float a[], float b[] ) {
    b[k] = a[k];
}
example3( 0 .. 100 , a,b);

This function would be executed 100 times.

int a[10_000] = [ ... ];
int b[10_000];
void example3( aggregate in range r ; kx, aggregate in range r2 ; ky, in float a[], float b[] ) {
    b[kx+(ky*100)] = a[kx+(ky*100)];
}
example3( 0 .. 100 , 0 .. 100 , a,b);

This function would be executed 10,000 times, the two aggregate 
ranges being treated as a single 2-dimensional range.


Maybe a better description of the rule would be that multiple 
ranges are multiplicative, and functionally operate as a single 
range.


OH.

I think I was totally misunderstanding you earlier. The 
'aggregate' is the range over the *problem space*, not the values 
being punched into the problem. Is this true or false?


(if true I'm about to feel incredibly sheepish)


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread David Nadlinger

On Saturday, 17 August 2013 at 08:29:37 UTC, glycerine wrote:

On Wednesday, 14 August 2013 at 13:43:50 UTC, Dicebot wrote:

On Wednesday, 14 August 2013 at 13:28:42 UTC, glycerine wrote:

Wishful thinking aside, they are competitors.


They are not. `std.serialization` does not and should not 
compete in Thrift domain.


Huh? Do you know what thrift does? Summary: Everything that
Orange/std.serialization does and more.


That's actually not true. Thrift does not serialize arbitrary 
object graphs, or any types with indirections, for that matter. 
This is by design: it would be hard to do this efficiently in all 
target languages, and contrary to Orange, performance is the main 
focus of Thrift.



If you
are going to standardize something, standardize the Thrift
bindings so that the compiler doesn't introduce regressions
that break them, like happened from dmd 2.062 to present.


On a related note, we desperately need to do something about 
this, especially since there seems to be an increased amount of 
interest in Thrift lately. For 2.061 and the previous releases, I 
always tested every beta against Thrift, and almost invariably 
found at least one bug/regression per release. However, for 2.062 
and 2.063, I was busy with LDC (and other things) at the time and 
it seems like I forgot to run the tests.


The DMD 2.062+ error message (see 
https://issues.apache.org/jira/browse/THRIFT-2130) doesn't make 
much sense; I guess the best way of going about this would be to 
try to DustMite-reduce the problem first or to fire up DMD in gdb 
to see what exactly is tripping the recursive alias error.


David


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread David Nadlinger

On Saturday, 17 August 2013 at 11:20:17 UTC, Dicebot wrote:
1) Having bindings in the standard library is discouraged; we have 
Deimos for that. There is only the curl stuff, and it is considered 
a bad solution as far as I am aware.


The D implementation of Thrift is actually not a binding and does 
not necessarily rely on the Thrift code generator either – all 
the latter does is generate a D struct definition for the 
types/method parameters in your .thrift file, which is then handled 
at D compile time via reflection. In fact, this even works the 
other way, allowing you to generate .thrift IDL files for 
existing D types. (And yes, in theory the code generator could be 
replaced by ImportExpressions and a CTFE parser.)
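
To illustrate the general idea (a simplified sketch only, not the 
actual Thrift-generated code): the generator emits something like a 
plain struct, and the library then walks its fields with 
compile-time reflection:

struct Person
{
    string name;
    int id;
}

// Library side: enumerate the fields at compile time.
string describe(T)(ref T value)
{
    string result;
    foreach (i, field; value.tupleof)
        result ~= __traits(identifier, T.tupleof[i]) ~ "\n";
    return result;
}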


David


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread David Nadlinger

On Saturday, 17 August 2013 at 10:15:34 UTC, BS wrote:
I'd rather that was left for a separate module (or two or 
three) built on top of std.serialization.


In an ideal world, Thrift could maybe be built on 
std.serialization, but in the current form that's not true 
(regardless of e.g. versioning, Orange is likely not fast 
enough), and I am not sure whether this is a desirable goal in 
the first place anyway.


David


Re: GPGPUs

2013-08-18 Thread luminousone

On Sunday, 18 August 2013 at 07:28:02 UTC, Atash wrote:

On Sunday, 18 August 2013 at 06:22:30 UTC, luminousone wrote:
The Xeon Phi is interesting insofar as it takes generic 
programming to a more parallel environment. However, it has 
some serious limitations that will heavily damage its 
potential performance.


AVX2 is completely the wrong path to go about improving 
performance in parallel computing. The SIMD nature of this 
instruction set means that scalar operations, or even just not 
being able to fill the giant 256/512-bit registers, waste huge 
chunks of this thing's peak theoretical performance, and if any 
rules apply to instruction pairing on this multi-issue 
pipeline you have yet more potential for wasted cycles.


I haven't seen anything about Intel's micro-thread scheduler, 
or how these chips handle the mass context switching natural to 
micro-threaded environments. These two items make a huge 
difference in performance; comparing Radeon VLIW5/4 to Radeon 
GCN is a good example: most of the performance benefit of GCN 
comes from the ease of scheduling scalar pipelines over more 
complex pipes with instruction pairing rules etc.


Frankly, Intel has some cool stuff, but they have been caught 
with their pants down; they have depended on their large fab 
advantage to carry them and got lazy.


We likely are watching AMD64 all over again.


Well, I can't argue that one.

At first, simply a different way of approaching std.parallelism-like 
functionality, with an eye to GPGPU in the future when easy 
integration solutions pop up (such as HSA).


I can't argue with that either.

It would be best to wait for a more generic software platform, 
to find out how this is handled by the next generation of 
micro-threading tools.


The way OpenCL/CUDA work reminds me too much of someone setting 
up Tomcat to have Java code generate PHP that runs on their 
Apache server, just because they can. I would rather have 
tighter integration with the core language than a language 
within a language.


Fair point. I have my own share of idyllic wants, so I can't 
argue with those.


Low-level optimization is a wonderful thing, but I almost 
wonder if this will always be something where, in order to do 
the low-level optimization, you will be using the vendor's 
provided platform for doing it, as no generic tool will be 
able to match the custom one.


But OpenCL is by no means a 'custom tool'. CUDA, maybe, but 
OpenCL just doesn't fit the bill in my opinion. I can see it 
being possible in the future that it'd be considered 
'low-level', but it's a fairly generic solution. A little 
hackneyed under your earlier metaphors, but still a generic, 
standard solution.


I can agree with that.

Most of my interaction with the GPU is via shader programs for 
OpenGL; I have only lightly used CUDA for some image 
processing software, so I am certainly not the one to give 
in-depth detail on optimization strategies.


There was a *lot* of stuff that opened up when vendors dumped 
GPGPU out of Pandora's box. If you want to get a feel for some 
optimization strategies and what they require, check this site 
out: http://www.bealto.com/gpu-sorting_intro.html (and I hope 
I'm not insulting your intelligence here, if I am, I truly 
apologize).


I am still learning, and additional links to go over never hurt! 
I am of the opinion that a good programmer has never finished 
learning new stuff.



sorry on point 1, that was a typo, I meant

1. The range must be known prior to execution of a gpu code 
block.


as for

3. Code blocks can only receive a single range, it can however 
be multidimensional


int a[100] = [ ... ];
int b[100];
void example3( aggregate in range r ; k, in float a[], float b[] ) {
    b[k] = a[k];
}
example3( 0 .. 100 , a,b);

This function would be executed 100 times.

int a[10_000] = [ ... ];
int b[10_000];
void example3( aggregate in range r ; kx, aggregate in range r2 ; ky, in float a[], float b[] ) {
    b[kx+(ky*100)] = a[kx+(ky*100)];
}
example3( 0 .. 100 , 0 .. 100 , a,b);

This function would be executed 10,000 times, the two 
aggregate ranges being treated as a single 2-dimensional range.


Maybe a better description of the rule would be that multiple 
ranges are multiplicative, and functionally operate as a 
single range.


OH.

I think I was totally misunderstanding you earlier. The 
'aggregate' is the range over the *problem space*, not the 
values being punched into the problem. Is this true or false?


(if true I'm about to feel incredibly sheepish)


Is "problem space" the correct industry term? I am self-taught on 
much of this, so on occasion I miss out on what the correct 
terminology for something is.


But yes, that is what I meant.
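
(For comparison, that 2-D problem space can be emulated in today's D 
by flattening it into one parallel index; a rough sketch using 
std.parallelism, not the proposed syntax:)

import std.parallelism : parallel;
import std.range : iota;

// Flatten the (0 .. 100, 0 .. 100) problem space into one index
// and split it back into (kx, ky) inside the loop body.
void example3(float[] b, const(float)[] a)
{
    foreach (i; parallel(iota(100 * 100)))
    {
        immutable kx = i % 100;
        immutable ky = i / 100;
        b[kx + ky * 100] = a[kx + ky * 100];
    }
}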




Re: GPGPUs

2013-08-18 Thread luminousone
I chose the term aggregate because it is the term used in the 
description of the foreach syntax.


foreach( value, key ; aggregate )

The aggregate being an array or range, the term seems to fit: even 
when the aggregate is an array you still implicitly have a range 
0 .. array.length, and will have a key or index position created 
by the foreach in addition to the value.


A wrapped function could very easily be similar to the intended 
initial outcome


void example( ref float a[], float b[], float c[] ) {
    foreach( v, k ; a ) {
        a[k] = b[k] + c[k];
    }
}

is functionally the same as

void example( aggregate ref float a[] ; k, float b[], float c[] ) {
    a[k] = b[k] + c[k];
}

Maybe : would make more sense than ; but I am not sure as to the 
best way to represent that index value.
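
(And a runnable approximation of the wrapped form, using today's 
std.parallelism rather than the hypothetical aggregate syntax:)

import std.parallelism : parallel;

// Each index k is an independent work item, as in the proposal.
void example(float[] a, const(float)[] b, const(float)[] c)
{
    foreach (k, ref av; parallel(a))
        av = b[k] + c[k];
}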


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-18 Thread Russel Winder
On Sun, 2013-08-18 at 01:59 -0400, John Joyus wrote:
 On 08/11/2013 04:22 AM, Walter Bright wrote:
  http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf
 
 This article claims the Performance [of D] is equivalent to C.
 
 Is that true? I mean even if D reaches 90% of C's performance, I still 
 consider it great because of its productive features, but are there any 
 benchmarks done?

Not a statistically significant benchmark but an interesting data point:

C:

 Sequential
pi = 3.141592653589970752
iteration count = 10
elapse time = 8.623442

C++:

 Sequential
pi = 3.14159265358997075
iteration count = 10
elapse = 8.612123967

D:

 pi_sequential.d
π = 3.141592653589970752
iteration count = 10
elapse time = 8.612256


C and C++ were compiled with GCC 4.8.1 full optimization, D was compiled
with LDC full optimization. Oh go on, let's do it with GDC as well:

 pi_sequential.d
π = 3.141592653589970752
iteration count = 10
elapse time = 8.616558


And you are going to ask about DMD aren't you :-)

 pi_sequential.d
π = 3.141592653589970752
iteration count = 10
elapse time = 9.495549

Remember, this is one and only one data point, not even a sample. 
Thus only hypothesis building is allowed, no deductions. But I begin 
to believe that D is as fast as C and C++ using GDC and LDC. DMD is 
not in the execution performance game.

Further fudging: the code is just one for loop. The parallel results 
are just as encouraging for D. I will say though that std.concurrency 
and std.parallelism could do with some more work. On the other hand C 
has nothing like them, whilst C++ has a few options including TBB.
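
For the curious, the sequential kernel is essentially just this 
shape of loop (a sketch from memory; the actual benchmark source may 
differ):

// Midpoint-rule quadrature of 4/(1+x^2) over [0,1], which equals pi.
double sequentialPi(long n)
{
    immutable delta = 1.0 / n;
    double sum = 0.0;
    foreach (i; 1 .. n + 1)
    {
        immutable x = (i - 0.5) * delta;
        sum += 1.0 / (1.0 + x * x);
    }
    return 4.0 * delta * sum;
}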

As I say, indicators, not statistically significant results without big
data samples and serious ANOVA.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread ilya-stromberg
On Wednesday, 14 August 2013 at 16:25:21 UTC, Andrei Alexandrescu 
wrote:

On 8/14/13 1:48 AM, Jacob Carlborg wrote:

On 2013-08-14 10:19, Tyler Jameson Little wrote:

  - I would like to serialize to a range (file?) and deserialize 
from a range (file?)


The serialized data is returned as an array, so that is compatible 
with the range interface, it just won't be lazy.


This seems like a major limitation. (Disclaimer: I haven't read 
the documentation yet.)


Andrei


Shall we fix it before accepting std.serialization?

For example, if I have 10GB of data and 16GB of operating memory, I 
can't use std.serialization. It saves all my data into a string in 
operating memory, so I haven't got enough memory to save the data 
to a file. It's currently limited by std.xml.


On the other hand, std.serialization can help in many other cases, 
if I have enough memory to store a copy of my data.


As I can see, we have a few options:
- accept std.serialization as is. If users can't use 
std.serialization due to memory limitations, they should find another 
way.
- hold std.serialization until we have a new std.xml module 
with support for range/file input/output. Users should use Orange 
if they need std.serialization right now.
- hold std.serialization until we have a binary archive for 
serialization with support for range/file input/output. Users 
should use Orange if they need std.serialization right now.
- use another XML library, for example from Tango.

Ideas?


GPGPU and D

2013-08-18 Thread Russel Winder
Luminousone, Atash, John,

Thanks for the email exchanges on this, there is a lot of good stuff in
there that needs to be extracted from the mail threads and turned into a
manifesto type document that can be used to drive getting a design and
realization together. The question is what infrastructure would work for
us to collaborate. Perhaps create a GitHub group and a repository to act
as a shared filestore?

I can certainly do that bit of admin and then try and start a document
summarizing the email threads so far, if that is a good way forward on
this.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Marek Janukowicz
ilya-stromberg wrote:
 The serialized data is returned as an array, so that is
 compatible with
 the range interface, it just won't be lazy.

 This seems like a major limitation. (Disclaimer: I haven't read
 the documentation yet.)

 Andrei
 
 Shall we fix it before accepting std.serialization?
 
 For example, if I have 10GB of data and 16GB of operating memory, I
 can't use std.serialization. It saves all my data into a string in
 operating memory, so I haven't got enough memory to save the data
 to a file. It's currently limited by std.xml.
 
 On the other hand, std.serialization can help in many other cases,
 if I have enough memory to store a copy of my data.
 
 As I can see, we have a few options:
 - accept std.serialization as is. If users can't use
 std.serialization due to memory limitations, they should find another
 way.
 - hold std.serialization until we have a new std.xml module
 with support for range/file input/output. Users should use Orange
 if they need std.serialization right now.
 - hold std.serialization until we have a binary archive for
 serialization with support for range/file input/output. Users
 should use Orange if they need std.serialization right now.
 - use another XML library, for example from Tango.

My opinion is: accept it as it is (if it's not completely broken). I 
recently needed some way to serialize a data structure (in order to save the 
state of the app and restore it later) and was quite disappointed there is 
nothing like that in Phobos. Although XML is not necessarily well suited to 
my particular use case, it's still better than nothing.

A binary archive would be a great plus, but allow me to point out that the current 
state of affairs (std.serialization being in a pre-accepted state for a long 
time AFAIK) is probably the worst state we might have: on the one hand I 
would not use third-party libs, because std.serialization is just around the 
corner; on the other I don't have std.serialization distributed with the 
compiler yet. Also, a binary archive is an extension, not a change, so I don't 
see any reason why it could not be added later (because it would be backward 
compatible).

-- 
Marek Janukowicz


Any cryptographically secure pseudo-random number generator (CSPRNG) for D?

2013-08-18 Thread ilya-stromberg

Hi,

Do you know any cryptographically secure pseudo-random number 
generator (CSPRNG) for D?


I know that we have std.random, but it is NOT cryptographically 
secure.


Thanks.


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-18 Thread Jeff Nowakowski

On 08/18/2013 01:59 AM, John Joyus wrote:

On 08/11/2013 04:22 AM, Walter Bright wrote:

http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


This article claims the Performance [of D] is equivalent to C.

Is that true? I mean even if D reaches 90% of C's performance, I still
consider it great because of its productive features, but are there any
benchmarks done?


That claim is highly dubious. D's garbage collector is a known 
performance bottleneck. I read the paper and didn't see any benchmarks. 
It was mostly about how they interfaced with a C library. Yes, in 
limited circumstances, if you write D like you would write C, you can get 
comparable performance.


However, the point of D is to allow high-level coding and to make use of 
the garbage collector by default, so that's where the interesting 
comparisons are to be made.


Re: blocks with attributes vs inlined lambda

2013-08-18 Thread monarch_dodra

On Tuesday, 18 June 2013 at 07:58:06 UTC, Kenji Hara wrote:
Inlining should remove the performance penalty. Nobody holds the 
immediately called lambda, so it should be treated as a 'scope 
delegate'. For that, we would need to add a section in the 
language spec to support it.


Kenji:

I've been doing some benchmarks recently: using an inlined 
lambda seems to really kill performance, both with and without 
-inline (tested with both dmd and gdc).


However, using a named function and then immediately calling it, 
there is zero performance penalty (both w/ and w/o -inline).


Is this a bug? Can it be fixed? Should I file an ER?
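
For reference, the two patterns being compared look like this 
(sketch):

int f()
{
    // 1) immediately called lambda: carries a real cost today
    auto a = (){ return 21; }();

    // 2) named nested function, immediately called: no penalty observed
    int g() { return 21; }
    auto b = g();

    return a + b;
}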


Re: Experiments with emscripten and D

2013-08-18 Thread Gambler
On 8/18/2013 12:52 AM, deadalnix wrote:
 On Saturday, 17 August 2013 at 16:43:14 UTC, Piotr Szturmaj wrote:
 What happens when you forget a semicolon or a comma? Or make some
 typos? It silently breaks. I don't care if there are tools to help
 with it. It's still a mess. Did you see WAT
 (https://www.destroyallsoftware.com/talks/wat) ? If not, please do.
 Here are some examples of JS additions which WAT demonstrated:

 [] + [] // yields empty string
 [] + {} // [object Object]
 {} + [] // 0
 {} + {} // NaN

 Seriously, WTF!

 
 I could explain, but I'm not sure you are interested. The ambiguity on
 that one comes from {} being either an object literal or a block statement.

This is what slays me about the JavaScript community. There is always
someone who says I can explain! and starts posting type conversion
trivia. Which is irrelevant, because no one cares *how* exactly this
happens. What's important is *why*, and there are no graceful
explanations for that. JavaScript's type system is a bloody mess.

[0, -1, -2].sort() returns [-1, -2, 0].

[1, 2, 3, 4, 5, 6, 7, 8, 9].map(parseInt) returns [1,
NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN].

2147483648 | 0 returns -2147483648.

But I'm not even sure that's the worst part of the language. I think the
absolute worst aspect of JS is that it's missing huge chunks of *basic*
functionality, so everyone invents their own design patterns by
abusing the hell out of what's available.

Actual example from StackOverflow with 145 upvotes:

var CONFIG = (function() {
    var private = {
        'MY_CONST': '1',
    };

    return {
        get: function(name) { return private[name]; }
    };
})();
alert(CONFIG.get('MY_CONST')); //I invented constants!!!

So in the end every project ends up using their own little set of ugly
hacks that's both hard to read and differs from whatever everyone else
is doing.

 I feel pretty confident I can do a wat speech for D.

Not with such succinct, hilarious, obviously broken examples, I bet.

 Obviously, you can't share every possible code. It's about sharing the
 cross cutting parts. Imagine an application with a server, web
 frontend and desktop frontend. With shared codebases, you could write
 one client code. Just use the same abstractions for the web and for
 the desktop. Then you can create similar interfaces from the same
 codebase. It's a huge win. Duplicating efforts is tiresome, error
 prone and obviously takes time.

 
 It has already been done, but didn't catch on. Actually I can't even
 find these projects on Google, so I guess it was a complete fail.
 
 It worked, but caused many problems, especially security-wise.



Re: Experiments with emscripten and D

2013-08-18 Thread John Colvin

On Sunday, 18 August 2013 at 04:52:16 UTC, deadalnix wrote:

I feel pretty confident I can do a wat speech for D.


I can't think of many wats. There are questionable design 
decisions, annoying things that look like they should compile but 
don't (local variable alias to non-global template errors 
anyone?) and performance pitfalls, but nothing so gloriously 
broken as what javascript achieves.


Coming from a C(++) perspective has prevented a lot of complete 
gibbering insanity from occurring. In particular, having a basic 
respect for data layout at the machine level and an ethos of not 
doing very costly implicit type conversions has staved off a 
whole horde of crazy.


Re: Experiments with emscripten and D

2013-08-18 Thread Gambler
On 8/17/2013 5:16 PM, John Colvin wrote:
 On Saturday, 17 August 2013 at 20:58:09 UTC, Dicebot wrote:
 On Saturday, 17 August 2013 at 20:42:33 UTC, H. S. Teoh wrote:
 And you'd have to sandbox the code since arbitrary D code running wild
 on the user's computer is a Bad Thing(tm). Which runs into GC-related
 issues when your client is a low-memory handheld device. Though arguably
 this would still be an improvement over JS, since an interpreted
 language necessarily uses more resources.

 You are getting pretty close to NaCl idea :)
 
 Yeah, I was thinking that :p
 
 NaCl seems designed mostly as native extensions for the html/js/css world.
 
 I was thinking bigger: The browser as a (transient, if appropriate)
 application delivery system.

IMO, fixing the web and creating a simple/secure delivery/sandboxing
system for native apps are two different tasks. Both are very much
needed, but the solutions are unlikely to overlap. MS Research is
working on several projects for app delivery. Joanna Rutkowska has her
Cubes project, which sounds very, very interesting. None of those will
fix the problems Web is facing, though. (Aside from taking away the need
for thick-client-implemented-in-JS applications.)


Re: Experiments with emscripten and D

2013-08-18 Thread deadalnix

On Sunday, 18 August 2013 at 11:21:52 UTC, John Colvin wrote:

On Sunday, 18 August 2013 at 04:52:16 UTC, deadalnix wrote:

I feel pretty confident I can do a wat speech for D.


I can't think of many wats. There are questionable design 
decisions, annoying things that look like they should compile 
but don't (local variable alias to non-global template errors 
anyone?) and performance pitfalls, but nothing so gloriously 
broken as what javascript achieves.




I know many of them. That is probably because I'm working on SDC, 
so I have to dig into dark corners of the language; nevertheless, 
it is really possible to do.


In fact I planned to do it, but Denis Korskin warned me about the 
bad image it would create of D, and I have to admit he was right, 
so I abandoned the idea.


Coming from a C(++) perspective has prevented a lot of complete 
gibbering insanity from occurring. In particular, having a 
basic respect for data layout at the machine level and an ethos 
of not doing very costly implicit type conversions has 
staved off a whole horde of crazy.


I'm not here for nothing :D


Re: Experiments with emscripten and D

2013-08-18 Thread John Colvin

On Sunday, 18 August 2013 at 12:48:49 UTC, deadalnix wrote:

On Sunday, 18 August 2013 at 11:21:52 UTC, John Colvin wrote:

On Sunday, 18 August 2013 at 04:52:16 UTC, deadalnix wrote:

I feel pretty confident I can do a wat speech for D.


I can't think of many wats. There are questionable design 
decisions, annoying things that look like they should compile 
but don't (local variable alias to non-global template errors 
anyone?) and performance pitfalls, but nothing so gloriously 
broken as what javascript achieves.




I know many of them. That is probably because I'm working on 
SDC, so I have to dig into dark corners of the language; 
nevertheless, it is really possible to do.


In fact I planned to do it, but Denis Korskin warned me about 
the bad image it would create of D, and I have to admit he was 
right, so I abandoned the idea.


I presume they are at least documented somewhere? A page on the 
wiki? I'd be interested to see them myself. Who knows, maybe some 
of them are fixable?


Coming from a C(++) perspective has prevented a lot of 
complete gibbering insanity from occurring. In particular, 
having a basic respect for data layout at the machine level 
and an ethos of not doing very costly implicit type 
conversions has staved off a whole horde of crazy.


I'm not here for nothing :D


good point :)


Re: Possible codegen bug when using Tcl/Tk (related to DMC and DMD)

2013-08-18 Thread Andrej Mitrovic
On 8/18/13, yaz yazan.dab...@gmail.com wrote:
 I think this is the same issue as
 https://github.com/aldacron/Derelict3/issues/143

I remember seeing that! I also tried -L/SUBSYSTEM:WINDOWS at first
(without the number), but it didn't make a difference. However, using
-L/SUBSYSTEM:WINDOWS:5.01 actually fixes the issue. Thanks!

But I would still like to know why this behavior happens without this
flag. Does anyone know?


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Andrej Mitrovic
On 8/18/13, Marek Janukowicz ma...@janukowicz.net wrote:
 I recently needed some way to serialize a data structure (in order to save the
 state of the app and restore it later) and was quite disappointed there is
 nothing like that in Phobos.

FWIW you could try out msgpack-d: https://github.com/msgpack/msgpack-d#usage

It's a very tiny and fast library.


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Tobias Pankrath

On Sunday, 18 August 2013 at 08:38:53 UTC, ilya-stromberg wrote:

As I can see, we have a few options:
- accept std.serialization as is. If users can't use 
std.serialization due to memory limitations, they should find 
another way.
- hold std.serialization until we have a new std.xml module 
with support for range/file input/output. Users should use 
Orange if they need std.serialization right now.
- hold std.serialization until we have a binary archive for 
serialization with support for range/file input/output. Users 
should use Orange if they need std.serialization right now.

- use another XML library, for example from Tango.

Ideas?


We should add a suitable range interface, even if it makes no 
sense with the current std.xml, and include std.serialization now. 
For many use cases it will be sufficient, and the improvements can 
come when std.xml2 arrives. Holding back std.serialization will 
only mean that we won't see any new backends from users; it would 
be quite unfair to Jacob and may keep off other contributors.


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Andrej Mitrovic
On 8/18/13, David Nadlinger c...@klickverbot.at wrote:
 On Saturday, 17 August 2013 at 08:29:37 UTC, glycerine wrote:
 If you
 are going to standardize something, standardize the Thrift
 bindings so that the compiler doesn't introduce regressions
 that break them, like happened from dmd 2.062 to present.

 On a related note, we desperately need to do something about
 this, especially since there seems to be an increased amount of
 interest in Thrift lately. For 2.061 and the previous releases, I
 always tested every beta against Thrift, and almost invariably
 found at least one bug/regression per release. However, for 2.062
 and 2.063, I was busy with LDC (and other things) at the time and
 it seems like I forgot to run the tests.

I think it would be good if we added Thrift and other test cases, for
example from the D Templates Book, to the test machines. But since
there's a lot of code, maybe the test machines should run the tests
sporadically (e.g. after every #N new commits), otherwise pull
requests would take forever to test.

Alternatively, we could at least try to test these major projects with
release candidates. Normally the project maintainers would do this
themselves, but it's easy to run out of time or just to forget to test
things, and then it's too late (well, we have fixup DMD releases now so
it's not too bad).


windows .lib files (dmc has them, dmd doesn't)

2013-08-18 Thread Adam D. Ruppe

Can we get some more .lib files with the dmd distribution?

Specifically, I'd really like to have opengl32.lib and glu32.lib 
included. My copy of dmc has them, but my dmd doesn't. Together 
they are only 43 K; I say that's well worth adding to the dmd zip.



On a tangential note, if we do ever decide to break up the zip 
into windows, linux, etc., I've said before that I'm meh on this 
but could live with it as long as the folder layouts remained the 
same.


But I actually see a potential benefit to it now: a separate 
dmd-windows.zip could use the space saved by ditching linux 
binaries to bring in more Windows stuff, like these .lib files, 
more win32 headers, the resource compiler, import library, etc., 
to save people from having to grab the basic utilities package or 
dmc separately to do these quite common Windows programming tasks.


Re: windows .lib files (dmc has them, dmd doesn't)

2013-08-18 Thread Andrej Mitrovic

On Sunday, 18 August 2013 at 14:51:52 UTC, Adam D. Ruppe wrote:

Can we get some more .lib files with the dmd distribution?


And also update the old ones:
http://d.puremagic.com/issues/show_bug.cgi?id=6625


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Tyler Jameson Little

On Sunday, 18 August 2013 at 14:24:38 UTC, Tobias Pankrath wrote:

On Sunday, 18 August 2013 at 08:38:53 UTC, ilya-stromberg wrote:

As I can see, we have a few options:
- accept std.serialization as is. If users can't use 
std.serialization due to memory limitations, they should find 
another way.
- hold std.serialization until we have a new std.xml module 
with support for range/file input/output. Users should use 
Orange if they need std.serialization right now.
- hold std.serialization until we have a binary archive for 
serialization with support for range/file input/output. Users 
should use Orange if they need std.serialization right now.

- use another XML library, for example from Tango.

Ideas?


We should add a suitable range interface, even if it makes no 
sense with the current std.xml, and include std.serialization now. 
For many use cases it will be sufficient, and the improvements 
can come when std.xml2 arrives. Holding back std.serialization 
will only mean that we won't see any new backends from users; it 
would be quite unfair to Jacob and may keep off other 
contributors.


I completely agree.

I'm the one that brought it up, and I mostly brought it up so the 
API doesn't have to change once std.xml is fixed. I don't think 
changing the return type to a range will be too difficult or 
memory expensive.


Also, since slices *are* ranges, shouldn't this just work?
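
Something like the following shape is what I have in mind (a 
hypothetical signature, not the current std.serialization API):

import std.range : isOutputRange, put;

// Instead of building the whole document in memory, the archive
// writes into any char output range (an Appender, a file writer, ...).
void serializeTo(T, Sink)(auto ref T value, ref Sink sink)
    if (isOutputRange!(Sink, char))
{
    put(sink, "<object>");
    // ... walk value's fields and put() them into sink lazily ...
    put(sink, "</object>");
}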


When compiling multiple source files

2013-08-18 Thread ProgrammingGhost

How does the compiler do static typing of multiple source files?
I heard D mallocs memory and doesn't free it, to speed up
compilation, but I am guessing every compiler instance doesn't
compile just one source file? My question is: if I have a function
in this file and another in a different file, what does the compiler
do when both files need to know the definition of the other? Also
how does it handle modules?

  From another thing I heard, text parsing can be ridiculously fast,
so there may be no need for a binary representation of each parsed
file. Does the D compiler read all source files into memory,
generate the AST, then start compiling each file? I know there's
more than one compiler, but I wouldn't mind hearing from either or
both if they differ.


Re: Static unittests?

2013-08-18 Thread Borislav Kosharov
On Saturday, 17 August 2013 at 17:48:04 UTC, Andrej Mitrovic 
wrote:

On 8/17/13, Borislav Kosharov boby_...@abv.bg wrote:
I really think that this should be added to the language, because 
it doesn't break stuff and it is useful. And the 'static' keyword 
is already used in many places like module imports and ifs.


Have you tried using the new getUnitTests trait in the git-head
version? If not it will be in the 2.064 release.

Btw such a 'static unittest' feature is certainly going to break 
code, because static can be applied as a label, for example:

class C
{
static:
    void foo() { }

    unittest { /* Test the foo method. */ }   // suddenly evaluated at compile-time
}


Oh, I really haven't thought about that. Maybe I will try this new 
trait. Or another solution is to add a compiler switch that will 
try to execute all the tests during compilation or something.
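
Something along these lines, assuming the 2.064 trait's interface (a 
sketch, untested):

unittest { assert(1 + 1 == 2); }

void main()
{
    // run this module's unittests explicitly via the new trait
    foreach (test; __traits(getUnitTests, mixin(__MODULE__)))
        test();
}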


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread David Nadlinger

On Sunday, 18 August 2013 at 14:52:04 UTC, Andrej Mitrovic wrote:

Normally the project maintainers would do this themselves, but 
it's easy to run out of time or just to forget to test things, 
and then it's too late (well, we have fixup DMD releases now so 
it's not too bad).


The big problem with this right now is that quite frequently, you 
run the tests and discover one regression in the beta, file it, 
fix it (or wait for it to get fixed), then run the tests again, 
discover that they still don't pass, etc.


This is not only an annoying and time-intensive job for the 
maintainer of the project (as during beta you have to pretty much 
always be on your toes for a new version to test lest Walter 
decide to make the final release), but this also increases beta 
duration.


One obvious reaction to this (as a project maintainer) would be 
to continuously track Git master and report regressions as they 
arise. However, this is also not always practical, as quite 
often, there is a regression/backwards-incompatible change early 
on in the development process that is not fixed until much later, 
so that multiple issues can still pile up unnoticed.


Having a system that regularly, automatically runs the test 
suites of several larger, well-known D projects with the results 
being readily available to the DMD/druntime/Phobos teams would 
certainly help. But it's also not ideal, since if a project 
starts to fail, the exact nature of the issue (regression in DMD 
or bug in the project, and if the former, a minimal test case) 
can often be hard to track down for somebody not already familiar 
with the code base.


David


Re: Static unittests?

2013-08-18 Thread monarch_dodra
On Sunday, 18 August 2013 at 16:15:30 UTC, Borislav Kosharov 
wrote:
On Saturday, 17 August 2013 at 17:48:04 UTC, Andrej Mitrovic 
wrote:

On 8/17/13, Borislav Kosharov boby_...@abv.bg wrote:
I really think that this should be added to the language, because 
it doesn't break stuff and it is useful. And the 'static' keyword 
is already used in many places like module imports and ifs.


Have you tried using the new getUnitTests trait in the git-head
version? If not it will be in the 2.064 release.

Btw such a 'static unittest' feature is certainly going to 
break code

because static can be applied as a label, for example:

class C
{
static:
    void foo() { }

    unittest { /* Test the foo method. */ }   // suddenly evaluated at compile-time
}


Oh, I really haven't thought about that. Maybe I will try this 
new trait. Or another solution is to add a compiler switch that 
will try to execute all the tests during compilation or 
something.


Well, that assumes all your code is CTFE-able...

I've taken to doing something along the line of:

unittest
{
    void dg()
    {
        // BODY OF UNITTEST
    }
    dg(); //test runtime
    assertCTFEable!dg; //test compiletime
}

It's a quick and easy way of testing both code paths, with 
minimal duplication and hassle.
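
(assertCTFEable here is just a tiny helper; a minimal sketch of one, 
in case you don't have a version lying around:)

// Force dg through CTFE: if the body isn't CTFE-able, the
// static assert fails to evaluate at compile time.
void assertCTFEable(alias dg)()
{
    static assert({ dg(); return true; }());
}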


Re: windows .lib files (dmc has them, dmd doesn't)

2013-08-18 Thread evilrat

On Sunday, 18 August 2013 at 14:51:52 UTC, Adam D. Ruppe wrote:

Can we get some more .lib files with the dmd distribution?

Specifically, I'd really like to have opengl32.lib and 
glu32.lib included. My copy of dmc has them, but my dmd 
doesn't. Together they are only 43 K; I say that's well worth 
adding to the dmd zip.


Maybe also add coffimplib? Not everyone knows about its
existence; also, placing a short note about it in the readme would
be good for beginners.


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-18 Thread Nick Sabalausky
On Sun, 18 Aug 2013 06:35:30 -0400
Jeff Nowakowski j...@dilacero.org wrote:

 On 08/18/2013 01:59 AM, John Joyus wrote:
  On 08/11/2013 04:22 AM, Walter Bright wrote:
  http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf
 
  This article claims the Performance [of D] is equivalent to C.
 
  Is that true? I mean even if D reaches 90% of C's performance, I
  still consider it great because of its productive features, but are
  there any benchmarks done?
 
 That claim is highly dubious. D's garbage collector is a known 
 performance bottleneck. 

Well, but aside from that one thing, people need to keep in mind that D
is not dynamic/interpreted/VMed, and does have full low-level
capabilities. Those are the things that make C fast, and D shares them.
Plus, modern codegen also makes things fast, and D generally uses the
same backends as C (GCC, LLVM), so again D shares that, too.



Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-18 Thread ProgrammingGhost
On Sunday, 11 August 2013 at 18:25:02 UTC, Andrei Alexandrescu 
wrote:
For a column of text to be readable it should have not much 
more than 10 words per line. Going beyond that forces eyes to 
scan too jerkily and causes difficulty in following line breaks.


This.
Also some people can read a line a second because they read 
downward instead of left to right. Although I heard this through 
hearsay


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-18 Thread Iain Buclaw
On 18 August 2013 18:24, ProgrammingGhost
dsioafiseghvfawklncfskz...@sdifjsdiovgfdisjcisj.com wrote:
 On Sunday, 11 August 2013 at 18:25:02 UTC, Andrei Alexandrescu wrote:

 For a column of text to be readable it should have not much more than 10
 words per line. Going beyond that forces eyes to scan too jerkily and causes
 difficulty in following line breaks.


 This.
 Also, some people can read a line a second because they read downward instead
 of left to right, although I heard this through hearsay.

Probably more like two lines at once, if they are reading a book.
Reading code? I reckon you can read downwards on that. :)

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: GPGPU and D

2013-08-18 Thread John Colvin

On Sunday, 18 August 2013 at 08:40:33 UTC, Russel Winder wrote:

Luminousone, Atash, John,

Thanks for the email exchanges on this, there is a lot of good 
stuff in there that needs to be extracted from the mail threads 
and turned into a manifesto type document that can be used to 
drive getting a design and realization together. The question is 
what infrastructure would work for us to collaborate. Perhaps 
create a GitHub group and a repository to act as a shared 
filestore?

I can certainly do that bit of admin and then try and start a 
document summarizing the email threads so far, if that is a good 
way forward on this.


A GitHub group could be a good idea, for sure. A simple wiki page 
with some sketched-out goals would be good too, which I guess 
would draw on the content of the previous thread.


Anyway, I can't really get too involved right now; my master's 
thesis is due in a terrifyingly small amount of time.
However, come September and onwards I could definitely spend some 
serious time on this. If everything goes to plan I might well be 
able to justify working on such a project as part of my PhD.


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-18 Thread Dicebot
Yes, in limited circumstances, if you write D like you would 
write C, you can get comparable performance.


I'd say in all cases where you mimic C behavior in D one should 
expect the same or better performance with ldc/gdc, unless you 
hit a bug.


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Dicebot

On Sunday, 18 August 2013 at 16:33:51 UTC, David Nadlinger wrote:

...


Please, don't move too far from review topic ;) It is a separate 
issue to discuss.


Re: GPGPUs

2013-08-18 Thread Dejan Lekic

On Tuesday, 13 August 2013 at 18:21:12 UTC, eles wrote:

On Tuesday, 13 August 2013 at 16:27:46 UTC, Russel Winder wrote:
The entry point would be if D had a way of creating GPGPU 
kernels that

is better than the current C/C++ + tooling.


You mean an alternative to OpenCL language?

Because, I imagine, a library (libopencl) would be easy enough 
to write/bind.


Who's gonna standardize this language?


Tra55er did it long ago - look at the cl4d wrapper. I think it is 
on GitHub.


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-18 Thread John Colvin

On Sunday, 18 August 2013 at 18:08:58 UTC, Dicebot wrote:
Yes, in limited circumstances, if you write D like you would 
write C, you can get comparable performance.


I'd say in all cases where you mimic C behavior in D one should 
expect the same or better performance with ldc/gdc, unless you 
hit a bug.


array literal allocations. I guess that's debatably a performance 
bug.
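
e.g. (a sketch of the allocation in question):

int sum3()
{
    // this literal is GC-allocated on every call, even though it
    // could in principle be a static read-only array
    auto tmp = [1, 2, 3];
    return tmp[0] + tmp[1] + tmp[2];
}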


Re: GPGPUs

2013-08-18 Thread John Colvin

On Sunday, 18 August 2013 at 18:19:06 UTC, Dejan Lekic wrote:

On Tuesday, 13 August 2013 at 18:21:12 UTC, eles wrote:
On Tuesday, 13 August 2013 at 16:27:46 UTC, Russel Winder 
wrote:
The entry point would be if D had a way of creating GPGPU 
kernels that

is better than the current C/C++ + tooling.


You mean an alternative to OpenCL language?

Because, I imagine, a library (libopencl) would be easy enough 
to write/bind.


Who's gonna standardize this language?


Tra55er did it long ago - look at the cl4d wrapper. I think it 
is on GitHub.


I had no idea that existed. Thanks :) 
https://github.com/Trass3r/cl4d


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Dicebot

On Monday, 12 August 2013 at 13:27:45 UTC, Dicebot wrote:
Stepping up to act as a Review Manager for Jacob Carlborg's 
std.serialization


 Input 

Code: 
https://github.com/jacob-carlborg/phobos/tree/serialization


Documentation: 
https://dl.dropboxusercontent.com/u/18386187/docs/std.serialization/index.html


Previous review thread: 
http://forum.dlang.org/thread/adyanbsdsxsfdpvoo...@forum.dlang.org


 Changes since last review 

- Sources have been integrated into the Phobos source tree
- DDOC documentation has been provided in the form it should look 
on dlang.org
- Most utility functions/templates the code depends on have been 
inlined. Remaining `package` utility modules:

* std.serialization.archives.xmldocument
* std.serialization.attribute
* std.serialization.registerwrapper

 Information for reviewers 

The goal of this thread is to detect if there are any outstanding 
issues that need to be fixed before formal yes/no voting 
happens. If no critical objections arise, voting will begin 
next week.


Please take this seriously: If you identify problems along the 
way, please note if they are minor, serious, or showstoppers 
(http://wiki.dlang.org/Review/Process). This information will 
later be used to determine if the library is ready for voting.


If there are any frequent Phobos contributors / core developers 
reading, please pay extra attention to the submission's code style 
and how it fits into overall Phobos guidelines and structure.


-

Let the thread begin.

Jacob, it is probably worth creating a pull request with the latest 
rebased version of your proposal to simplify getting a quick 
overview of the changes. Also please say if there is anything you 
want/need to implement before merging.


OK, time to make a short summary.

Several issues / improvement possibilities have been mentioned. I 
don't think they prevent voting, and it is up to Jacob to decide 
what he wants to incorporate.


However, there are two things that do matter in my opinion - the 
pre-UDA part of the API and the uncertainty about the range-based 
lazy approach. The important thing here is that while the library 
can be included with plenty of features lacking, we can't really 
afford to break its API only a few releases later just to 
add/remove these features.


So as a review manager, I think voting should be delayed until 
the API is ready to address a lazy range-based work model. No actual 
implementation is required, but:


1) it should be possible to add it later without breaking user code
2) the library should not make assumptions about the implementation 
being lazy or eager


That is my understanding based on my current knowledge of Phobos 
modules; please correct me if I am wrong.


Jacob, please say if you have any objections or, if this 
decision sounds reasonable, just contact me via e-mail when you 
find std.serialization suitable for final voting. I think it is 
pretty clear that the package itself is considered useful and 
welcome in Phobos.
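
For illustration, a minimal sketch (all names hypothetical, not the proposed API) of a signature shape that satisfies both points by staying agnostic about lazy vs. eager:

import std.range : isOutputRange, put;
import std.conv : to;

// Hypothetical sketch: an archive that writes to any output range of char,
// so a later implementation is free to stream lazily or build eagerly.
void serialize(T, Sink)(ref Sink sink, T value)
    if (isOutputRange!(Sink, char))
{
    put(sink, value.to!string); // eager placeholder; a lazy archive could stream
}

unittest
{
    import std.array : appender;
    auto buf = appender!string();
    serialize(buf, 42);
    assert(buf.data == "42");
}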


Re: GPGPU and D

2013-08-18 Thread Russel Winder
On Sun, 2013-08-18 at 19:46 +0200, John Colvin wrote:

 A GitHub group could be a good idea, for sure. A simple wiki page 
 with some sketched out goals would be good too, which I guess 
 would draw on the content of the previous thread.

If I remember correctly in order to make a GitHub group you have to make
a user with an email address and convert it to a group.  I can set up a
temporary mail list on my SMTP server for this so no problem. The real
problem is what to call the group and the project. Anyone any ideas?

 Anyway, I can't really get too involved right now; my master's 
 thesis is due in a terrifyingly small amount of time.
 However, come September and onwards I could definitely spend some 
 serious time on this. If everything goes to plan I might well be 
 able to justify working on such a project as a part of my PhD.

I too do not have as much time to actually code on this as I would like in
the short term, but it is better to do little bits than nothing
at all. So having the infrastructure in place is an aid for little
things to happen and keeps the momentum going. Albeit a small
momentum. :-)

Good luck with the thesis writing.  What is the topic? Which university?

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Is D the Answer to the One vs. Two Language High Performance Computing Dilemma?

2013-08-18 Thread Timon Gehr

On 08/18/2013 08:26 PM, John Colvin wrote:

On Sunday, 18 August 2013 at 18:08:58 UTC, Dicebot wrote:

Yes, in limited circumstances, if you write D like you would write C,
you can get comparable performance.


I'd say in all cases when you mimic C behavior in D one should expect
the same or better performance with ldc/gdc, unless you hit a bug.


array literal allocations. I guess that's debatably a performance bug.


I guess that's debatably mimicking C behaviour.


Re: Is D the Answer to the One vs. Two Language High Performance Computing Dilemma?

2013-08-18 Thread Dicebot

On Sunday, 18 August 2013 at 18:31:17 UTC, Timon Gehr wrote:

I guess that's debatably mimicking C behaviour.


What behavior do you refer to?


Re: GPGPUs

2013-08-18 Thread Russel Winder
On Sun, 2013-08-18 at 20:27 +0200, John Colvin wrote:
 On Sunday, 18 August 2013 at 18:19:06 UTC, Dejan Lekic wrote:
  On Tuesday, 13 August 2013 at 18:21:12 UTC, eles wrote:
  On Tuesday, 13 August 2013 at 16:27:46 UTC, Russel Winder 
  wrote:
  The entry point would be if D had a way of creating GPGPU 
  kernels that
  is better than the current C/C++ + tooling.
 
  You mean an alternative to OpenCL language?
 
  Because, I imagine, a library (libopencl) would be easy enough 
  to write/bind.
 
  Who's gonna standardize this language?
 
  Tra55er did it long ago - look at the cl4d wrapper. I think it 
  is on GitHub.

Thanks for pointing this out, I had completely missed it.

 I had no idea that existed. Thanks :) 
 https://github.com/Trass3r/cl4d

I had missed that as well. Bad Google and GitHub skills on my part
clearly.

I think the path is now obvious, ask if the owner will turn this
repository over to a group so that it can become the focus of future
work via the repositories wiki and issue tracker.

I will fork this repository as is and begin to analyse the status quo
wrt the discussion recently on the email list.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Is D the Answer to the One vs. Two Language High Performance Computing Dilemma?

2013-08-18 Thread Dicebot

On Sunday, 18 August 2013 at 18:26:13 UTC, John Colvin wrote:

On Sunday, 18 August 2013 at 18:08:58 UTC, Dicebot wrote:
Yes, in limited circumstances, if you write D like you would 
write C, you can get comparable performance.


I'd say in all cases when you mimic C behavior in D one should 
expect the same or better performance with ldc/gdc, unless you hit 
a bug.


array literal allocations. I guess that's debatably a 
performance bug.


I have said "C behavior", not "C syntax". That is the main 
problem with comparing _language_ performance - stick to the same 
semantics and you are likely to get the same performance, but it may 
require a quite inconvenient coding style (e.g. working around 
array literals is a huge pain). So probably it makes more sense 
to compare idiomatic code. But where are the limits?


It is a bit easier with VM-based languages because the performance 
of the VM implementation itself matters and can be compared. 
Compiled languages with the same backend? No idea how to benchmark 
those properly.


When people expect to get a performance gain from simply using a 
certain language, it just can't end well.
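
To make the array literal point concrete, a minimal sketch of the pain point and its usual workaround (illustration only, not code from the thread):

void hot()
{
    int[] lit = [1, 2, 3];                     // GC allocation on every call
    static immutable int[3] fixed = [1, 2, 3]; // initialized once, no per-call allocation
}

void main()
{
    foreach (i; 0 .. 1_000)
        hot(); // a thousand allocations via the literal, none via the static array
}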


Re: GPGPUs

2013-08-18 Thread Atash
I'm not sure if 'problem space' is the industry standard term (in 
fact I doubt it), but it's certainly a term I've used over the 
years by taking a leaf out of math books and whatever my 
professors had touted. :-D I wish I knew what the standard term 
was, but for now I'm latching onto that because it seems to 
describe at a high implementation-agnostic level what's up, and 
in my personal experience most people seem to 'get it' when I use 
the term - it has empirically had an accurate connotation.


That all said, I'd like to know what the actual term is, too. -.-'

On Sunday, 18 August 2013 at 08:21:18 UTC, luminousone wrote:
I chose the term "aggregate" because it is the term used in the 
description of the foreach syntax.


foreach( value, key ; aggregate )

aggregate being an array or range, it seems to fit: even when 
the aggregate is an array, you still implicitly have a range 
"0 .. array.length", and a "key" or index 
position is created by the foreach in addition to the value.


A wrapped function could very easily be similar to the intended 
initial outcome


void example( ref float a[], float b[], float c[] ) {

   foreach( v, k ; a ) {
  a[k] = b[k] + c[k];
   }
}

is functionally the same as

void example( aggregate ref float a[] ; k, float b[], float c[] 
) {

   a[k] = b[k] + c[k];
}

maybe ':' would make more sense than ';', but I am not sure of 
the best way to represent that index value.


Aye, that makes awesome sense, but I'm left wishing that there 
was something in that syntax to support access to local/shared 
memory between work-items. Or, better yet, some way of hinting at 
desired amounts of memory in the various levels of the non-global 
memory hierarchy and a way of accessing those requested 
allocations.


I mean, I haven't *seen* anyone do anything device-wise with more 
hierarchical levels than just global-shared-private, but it's 
always bothered me that in OpenCL we could only specify memory 
allocations on those three levels. What if someone dropped in 
another hierarchical level? Suddenly it'd open another door to 
optimization of code, and there'd be no way for OpenCL to access 
it. Or what if someone scrapped local memory altogether, for 
whatever reason? The industry may never add/remove such memory 
levels, but, still, it just feels... kinda wrong that OpenCL 
doesn't have an immediate way of saying, "A'ight, it's cool that 
you have this, Mr. XYZ-compute-device, I can deal with it," 
before proceeding to put on sunglasses and driving away in a 
Ferrari. Or something like that.


Re: windows .lib files (dmc has them, dmd doesn't)

2013-08-18 Thread Adam D. Ruppe

On Sunday, 18 August 2013 at 17:11:53 UTC, evilrat wrote:

maybe also coffimplib too?


Yeah, I'd be for it, though coffimplib is part of the paid 
extended utility package so Walter might not be as keen on 
putting it in the free download. Though I would guess that having 
a comprehensive Windows D development package would be worth it.


Re: GPGPU and D

2013-08-18 Thread luminousone

On Sunday, 18 August 2013 at 08:40:33 UTC, Russel Winder wrote:

Luminousone, Atash, John,

Thanks for the email exchanges on this, there is a lot of good stuff in
there that needs to be extracted from the mail threads and turned into a
manifesto type document that can be used to drive getting a design and
realization together. The question is what infrastructure would work for
us to collaborate. Perhaps create a GitHub group and a repository to act
as a shared filestore?

I can certainly do that bit of admin and then try and start a document
summarizing the email threads so far, if that is a good way forward on
this.


GitHub seems fine to me; my coding skills are likely more limited 
than Atash's or John's. I am currently working as a student 
programmer at Utah's Weber State University while also attending 
as a part-time student. I am finishing the last couple of credit 
hours of an associate degree in CS, and I would like to work 
towards a higher-level degree or even eventually a PhD in CS.


I will help in whatever way I can however.


Re: Any cryptographically secure pseudo-random number generator (CSPRNG) for D?

2013-08-18 Thread QAston

On Sunday, 18 August 2013 at 10:14:29 UTC, ilya-stromberg wrote:

Hi,

Do you know any cryptographically secure pseudo-random number 
generator (CSPRNG) for D?


I know that we have std.random, but it is NOT cryptographically 
secure.


Thanks.


You may be interested in 
https://github.com/D-Programming-Deimos/openssl - D bindings for 
openssl.
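
For example, something along these lines should give cryptographically secure bytes (a sketch; I am assuming the bindings expose OpenSSL's RAND_bytes under deimos.openssl.rand with its C signature):

import deimos.openssl.rand : RAND_bytes;

void main()
{
    ubyte[32] key;
    // OpenSSL convention: RAND_bytes returns 1 on success
    if (RAND_bytes(key.ptr, cast(int) key.length) != 1)
        throw new Exception("RAND_bytes failed");
}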


Re: windows .lib files (dmc has them, dmd doesn't)

2013-08-18 Thread Nick Sabalausky
On Sun, 18 Aug 2013 20:59:17 +0200
Adam D. Ruppe destructiona...@gmail.com wrote:

 On Sunday, 18 August 2013 at 17:11:53 UTC, evilrat wrote:
  maybe also coffimplib too?
 
 Yeah, I'd be for it, though coffimplib is part of the paid 
 extended utility package so Walter might not be as keen on 
 putting it in the free download. Though I would guess that having 
 a comprehensive Windows D development package would be worth it.

Or at least implib which is part of the free basic utilities package.


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Jacob Carlborg

On 2013-08-18 10:38, ilya-stromberg wrote:


- use another xml library, for example from Tango.


The XML module from Tango expects the content to be in memory as well, 
at least the Document module.


--
/Jacob Carlborg


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Jacob Carlborg

On 2013-08-17 10:29, glycerine wrote:


Huh? Do you know what Thrift does? Summary: Everything that
Orange/std.serialization does and more. To the point: Thrift
provides data versioning, std.serialization does not. In my book:
end of story, game over. Thrift is the preferred choice. If you
are going to standardize something, standardize the Thrift
bindings so that the compiler doesn't introduce regressions
that break them, as happened from dmd 2.062 to the present.


Orange/std.serialization is capable of serializing more types than 
Thrift is. For example, it will correctly serialize and deserialize 
slices, pointers and so on.


It's easy to implement versioning yourself, something like:

class Foo
{
int version_;
int a;
int b;

void toData (Serializer serializer, Serializer.Data key)
{
serializer.serialize(a, "a");
serializer.serialize(version_, "version_");

if (version_ == 2)
serializer.serialize(b, "b");
}

// Do the corresponding in fromData.
}

If versioning is crucial it can be added.
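
For completeness, the corresponding fromData might look roughly like this (a sketch only, assuming a deserialize!(T)(key) method mirroring serialize):

void fromData (Serializer serializer, Serializer.Data key)
{
    a = serializer.deserialize!(int)("a");
    version_ = serializer.deserialize!(int)("version_");

    if (version_ == 2)
        b = serializer.deserialize!(int)("b");
}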

--
/Jacob Carlborg


Re: windows .lib files (dmc has them, dmd doesn't)

2013-08-18 Thread Adam D. Ruppe

On Sunday, 18 August 2013 at 14:51:52 UTC, Adam D. Ruppe wrote:
Specifically, I'd really like to have opengl32.lib and 
glu32.lib included.


in related news, my simpledisplay.d now supports the creation of 
OpenGL windows on both Windows and Linux/X11. But it is opt-in on 
Windows since without these .lib files, it won't successfully 
link, and I don't want it to fail compiling with a stock dmd.


I'll be pushing this to github later tonight.


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Marek Janukowicz
Andrej Mitrovic wrote:
 I recently needed some way to serialize a data structure (in order to
 save the state of the app and restore it later) and was quite
 disappointed there is nothing like that in Phobos.
 
 FWIW you could try out msgpack-d:
 https://github.com/msgpack/msgpack-d#usage
 
 It's a very tiny and a fast library.

That's what I ended up using, but I would be much more happy to have 
something like this in Phobos.

-- 
Marek Janukowicz


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Jonathan M Davis
On Sunday, August 18, 2013 21:45:59 Jacob Carlborg wrote:
 If versioning is crucial it can be added.

I don't know if it's crucial or not, but I know that the Java guys didn't have 
it initially but ended up adding it later, which would imply that they ran 
into problems that made them decide that it should be there. I'd certainly be 
inclined to think that it's better to have it, and it's probably easier to add 
it before it's merged than later. But I don't know how crucial it is.

- Jonathan M Davis


Re: std.serialization: pre-voting review / discussion

2013-08-18 Thread Walter Bright

On 8/18/2013 11:26 AM, Dicebot wrote:

So as a review manager, I think voting should be delayed until API is ready to
address lazy range-based work model.


I agree. Ranges are a very big deal for D, and libraries that can conceivably 
support it must do so.


Re: Static unittests?

2013-08-18 Thread Walter Bright

On 8/5/2013 11:27 AM, monarch_dodra wrote:

What about, for example:

assertCTFEable!({
 int i = 5;
 string s;
 while (i--)
 s ~= 'a';
 assert(s == "aaaaa");
});


I don't believe that is a valid use case because the code being tested is not 
accessible from anything other than the test.
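
For reference, such a helper can be written as a minimal sketch along these lines (close to what Phobos uses internally, to the best of my knowledge):

void assertCTFEable(alias dg)()
{
    static assert({ dg(); return true; }()); // forces the block through CTFE
    dg();                                    // and runs it at run time as well
}

unittest
{
    assertCTFEable!({
        int i = 5;
        string s;
        while (i--)
            s ~= 'a';
        assert(s == "aaaaa");
    });
}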




Re: GPGPU and D

2013-08-18 Thread John Colvin

On Sunday, 18 August 2013 at 18:30:29 UTC, Russel Winder wrote:

On Sun, 2013-08-18 at 19:46 +0200, John Colvin wrote:

A GitHub group could be a good idea, for sure. A simple wiki 
page with some sketched out goals would be good too, which I 
guess would draw on the content of the previous thread.


If I remember correctly in order to make a GitHub group you have to make
a user with an email address and convert it to a group. I can set up a
temporary mail list on my SMTP server for this so no problem. The real
problem is what to call the group and the project. Anyone any ideas?


Anyway, I can't really get too involved right now; my master's 
thesis is due in a terrifyingly small amount of time.
However, come September and onwards I could definitely spend 
some serious time on this. If everything goes to plan I might 
well be able to justify working on such a project as a part of 
my PhD.


I too do not have as much time to actually code on this as I would 
like in the short term, but it is better to do little bits than 
nothing at all. So having the infrastructure in place is an aid for 
little things to happen and keeps the momentum going. Albeit a small 
momentum. :-)

Good luck with the thesis writing.  What is the topic? Which 
university?


I always have a bad time explaining it haha. Here's the title: 
"Automated tracing of divergent ridges in tokamak magnetic spectra".


Basically, the fusion guys at Culham produce loads of spectrograms 
and have very little systematic workflow for analysing them. It's 
almost all done by eye. I've developed a new ridge-tracing 
algorithm and applied it to the spectra, with some extra 
steps to identify particular magnetic events that occur in the 
reactors. It's all a bit ad-hoc, but it'll do for a masters by 
research.


I'm at the University of Warwick, Engineering department (coming from 
a physics BSc). I'll be joint physics and engineering for the 
PhD, continuing (read: reinventing from scratch) the same work.


There is so much data with so much heavy-duty processing that a 
GPU solution will probably be a good choice. We have an HPC 
cluster with some GPU compute nodes*, so for me, being able to 
target them efficiently - both in runtime and developer-time 
terms - would be great. Much more interesting than just spamming 
the data proc nodes, anyway! I would have to persuade the 
sysadmins to install gdc/ldc though...


*(6 nodes, each with 2 NVIDIA Tesla M2050 GPUs, 48 GB RAM and 2 
Intel Xeon X5650s)


Re: Any cryptographically secure pseudo-random number generator (CSPRNG) for D?

2013-08-18 Thread Walter Bright

On 8/18/2013 12:32 PM, QAston wrote:

On Sunday, 18 August 2013 at 10:14:29 UTC, ilya-stromberg wrote:

Hi,

Do you know any cryptographically secure pseudo-random number generator
(CSPRNG) for D?

I know that we have std.random, but it is NOT cryptographically secure.

Thanks.


You may be interested in https://github.com/D-Programming-Deimos/openssl - D
bindings for openssl.


I agree. I'd call a C one from D that is accepted by the crypto community as 
secure, rather than invent an insecure one.


Re: Experiments with emscripten and D

2013-08-18 Thread H. S. Teoh
On Sun, Aug 18, 2013 at 06:52:14AM +0200, deadalnix wrote:
[...]
 I feel pretty confident I can do a wat speech for D.
[...]

Please do, I'm curious to hear it. :)

I can't think of any major WATs in D that come from the language itself.
Compiler bugs, OTOH, often elicit a "wat?!" from me, one recent example
being this gem from a friend:

import std.algorithm, std.stdio;
void main()
{
int[] arr = [3,4,2,1];

bool less(int a, int b)
{ return a < b; }
bool greater(int a, int b)
{ return a > b; }

{
auto dg = &less;
sort!(dg)(arr);
}
//this dg *should* be different from older destroyed dg
//rename to dg2 and everything works as expected
auto dg = &greater;
sort!(dg)(arr);

writeln(arr);
//outputs [1,2,3,4] instead of [4,3,2,1]
//change dg variable name in either place and it works
//also, if you don't do the 1st sort, it works correctly even 
with same variable name
}


http://d.puremagic.com/issues/show_bug.cgi?id=10619


T

-- 
We've all heard that a million monkeys banging on a million typewriters will 
eventually reproduce the entire works of Shakespeare.  Now, thanks to the 
Internet, we know this is not true. -- Robert Wilensky


Re: blocks with attributes vs inlined lambda

2013-08-18 Thread Kenji Hara
2013/8/18 monarch_dodra monarchdo...@gmail.com

 On Tuesday, 18 June 2013 at 07:58:06 UTC, Kenji Hara wrote:

 Inlining should remove performance penalty. Nobody holds the immediately
 called lambda, so it should be treated as a 'scope delegate'. For that, we
 would need to add a section in language spec to support it.


 Kenji:

 I've been doing some benchmarks recently: Using an inlined lambda seems
  to really kill performance, both with and without -inline (tested with
 both dmd and gdc).

 However, using a named function, and then immediately calling it, there is
 0 performance penalty (both w/ and w/o -inline).

  Is this a bug? Can it be fixed? Should I file an ER?


I opened a new bugzilla issue:
http://d.puremagic.com/issues/show_bug.cgi?id=10848

And start working for the compiler and doc fix:
https://github.com/D-Programming-Language/dlang.org/pull/372
https://github.com/D-Programming-Language/dmd/pull/2483

Kenji Hara
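
For reference, the two patterns being compared look roughly like this (hypothetical micro-benchmark bodies, not the actual benchmark code):

int viaLambda(int x)
{
    return (() => x * 2)();        // immediately-called lambda: reported slow
}

int viaNamed(int x)
{
    int helper() { return x * 2; } // named nested function, called immediately:
    return helper();               // reported to have no penalty
}

void main()
{
    assert(viaLambda(21) == 42 && viaNamed(21) == 42);
}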


Actor model D

2013-08-18 Thread Luís.Marques
Can anyone please explain to me what it means for the D language to 
follow the Actor model, as the relevant Wikipedia page says it 
does? [1]


[1] 
http://en.wikipedia.org/wiki/Actor_model#Later_Actor_programming_languages


Re: Experiments with emscripten and D

2013-08-18 Thread Daniel Murphy
H. S. Teoh hst...@quickfur.ath.cx wrote in message 
news:mailman.186.1376878962.1719.digitalmar...@puremagic.com...
 On Sun, Aug 18, 2013 at 06:52:14AM +0200, deadalnix wrote:
 [...]
 I feel pretty confident I can do a wat speech for D.
 [...]

 Please do, I'm curious to hear it. :)


import std.algorithm : filter, map;

void main()
{
foreach(c; map!"a"("hello")) // Identity map function
static assert(c.sizeof == 4); // Passes

foreach(c; filter!"true"("hello")) // Pass-all filter
static assert(c.sizeof == 4); // Passes

foreach(c; "hello")
static assert(c.sizeof == 4); // Fails
}


WAT.
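
The rule behind the WAT, for reference: range algorithms such as map and filter present strings as ranges of dchar (decoded code points), while plain foreach over a string defaults to the actual element type unless you ask for dchar explicitly:

void main()
{
    foreach (dchar c; "hello")   // explicit dchar: decoded, 4 bytes
        static assert(c.sizeof == 4);

    foreach (c; "hello")         // defaults to immutable(char): 1 byte
        static assert(c.sizeof == 1);
}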




Re: DMD under 64-bit Windows 7 HOWTO

2013-08-18 Thread Yongwei Wu
I recently installed DMD and encountered this page while Googling. 
It gave me some hints, but my changes to make it work on 64-bit 
Windows 7 + MSVC 2012 are really much less drastic. After adding 
C:\dmd2\windows\bin to PATH, I only needed to edit the LIB line in 
sc.ini to the following effect:


LIB=%VCINSTALLDIR%lib\amd64;%WindowsSdkDir%lib\win8\um\x64;%@P%\..\lib

I can then launch a VS2012 Developer Command Prompt to use either 
the -m32 or -m64 modes. For the normal command prompt, -m32 works, 
but -m64 does not. I do not find that a problem at all.


Re: Is D the Answer to the One vs. Two Language High Performance Computing Dilemma?

2013-08-18 Thread H. S. Teoh
On Sun, Aug 18, 2013 at 09:26:45AM +0100, Russel Winder wrote:
 On Sun, 2013-08-18 at 01:59 -0400, John Joyus wrote:
  On 08/11/2013 04:22 AM, Walter Bright wrote:
   http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf
  
  This article claims "the Performance [of D] is equivalent to C".
  
  Is that true? I mean even if D reaches 90% of C's performance, I
  still consider it great because of its productive features, but are
  there any benchmarks done?
 
 Not a statistically significant benchmark but an interesting data
 point:
 
 C:
 
  Sequential
   pi = 3.141592653589970752
   iteration count = 10
   elapse time = 8.623442
 
 C++:
 
  Sequential
   pi = 3.14159265358997075
   iteration count = 10
   elapse = 8.612123967
 
 D:
 
  pi_sequential.d
   π = 3.141592653589970752
   iteration count = 10
   elapse time = 8.612256
 
 
 C and C++ were compiled with GCC 4.8.1 full optimization, D was compiled
 with LDC full optimization. Oh go on, let's do it with GDC as well:
 
  pi_sequential.d
   π = 3.141592653589970752
   iteration count = 10
   elapse time = 8.616558
 
 
 And you are going to ask about DMD aren't you :-)
 
  pi_sequential.d
   π = 3.141592653589970752
   iteration count = 10
   elapse time = 9.495549
 
 Remember this is 1 and only 1 data point and not even a sample just a
 single data point. Thus only hypothesis building is allowed, no
 deductions.  But I begin to believe that D is as fast as C and C++
 using GDC and LDC. DMD is not in the execution performance game.
[...]

This may be merely only a single isolated data point, but it certainly
matches my experience with GDC / DMD. I find that gdc -O3 consistently
produces code that outperforms code produced by dmd -O -inline -release.

As for comparison with C/C++, I haven't really tested it myself so I
can't say. But I *will* say that it's far easier to write casual code
(i.e., not hand-tuned for performance) in D that has similar performance
to the C/C++ equivalent.


T

-- 
Microsoft is to operating systems & security ... what McDonalds is to gourmet 
cooking.


Re: Actor model D

2013-08-18 Thread Tyler Jameson Little

On Monday, 19 August 2013 at 03:11:00 UTC, Luís Marques wrote:
Can anyone please explain to me what it means for the D language 
to follow the Actor model, as the relevant Wikipedia page says 
it does? [1]


[1] 
http://en.wikipedia.org/wiki/Actor_model#Later_Actor_programming_languages


I assume this refers to task in std.parallelism and the various 
bits in std.concurrency for message passing.


I'm very surprised that D made the cut but Go didn't. I'm even 
more surprised that Rust was included even though it's not even 
1.0 yet while Go is at 1.1.1 currently.


I wish they had some kind of explanation or code examples to 
justify each one as in other articles, because I'm also very 
interested...
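
For what it's worth, the actor-flavored part of Phobos is easy to demonstrate; a minimal message-passing sketch with std.concurrency:

import std.concurrency;
import std.stdio;

// Each spawned thread owns a mailbox and communicates only via send/receive.
void worker()
{
    receive((int x) { ownerTid.send(x * 2); });
}

void main()
{
    auto tid = spawn(&worker);
    tid.send(21);
    writeln(receiveOnly!int()); // prints 42
}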


Language and library reference pages very slow to load

2013-08-18 Thread finalpatch
Apparently the JavaScript that's responsible for creating 
hyperlinks runs very slowly, usually several seconds or longer, 
e.g. http://dlang.org/phobos/core_memory.html is so slow it causes 
Mozilla Firefox to pop up the "page not responding" box. I have 
also tried Internet Explorer 10 on Windows 7 and Safari on Mac OS 
X 10.8.4 and got similar results.


I wonder if it's possible to move this to the server side, given 
the documents are mostly static content.


Re: Language and library reference pages very slow to load

2013-08-18 Thread H. S. Teoh
On Mon, Aug 19, 2013 at 06:42:04AM +0200, finalpatch wrote:
 Apparently the javascript that's responsible for creating hyperlinks
 runs very slowly, usually several seconds or longer.  eg.
 http://dlang.org/phobos/core_memory.html is so slow it causes
  Mozilla Firefox to pop up the "page not responding" box. I have also
 tried Internet Explorer 10 on Windows 7 and Safari on Mac OS X
 10.8.4 and got similar results.
 
 I wonder if it's possible to move this to the server side given the
 documents are mostly static contents.

My guess is that this is caused either by hyphenate.js or
hyphenate-selectively.js, both of which, thankfully, will be going away
once dlang.org is updated (their removal has already been merged into
git HEAD).

In the meantime, one option is to disable JS on dlang.org (that's what I
do, and it makes the site much more usable).


T

-- 
Uhh, I'm still not here. -- KD, while away on ICQ.


Re: how to get enclosing function as symbol ? (eg: __function__.stringof ==__FUNCTION__)

2013-08-18 Thread Nicolas Sicard

On Sunday, 18 August 2013 at 02:50:32 UTC, JS wrote:

On Sunday, 18 August 2013 at 01:52:50 UTC, Timothee Cour wrote:

Is there any way to get the enclosing function as symbol ?

I'd like something like that:
alternative names would be:
__function__
__context__


auto fun(alias caller=__function__)(){
 //caller represents fun1!double
 return ReturnType!caller.init;
}

T fun1(T)(T x){
 assert(__function__.stringof==__FUNCTION__);
 alias fun=__function__;
 assert(is(ReturnType!__function__ == T));
 return fun();
}
void main(){fun1!double();}



use a string mixin?


I thought this would work but it doesn't:
---
void foo(T)()
{
bar!__FUNCTION__();
}

void bar(string Caller)()
{
mixin("alias caller = " ~ Caller ~ ";");
}

void main()
{
foo!double();
}
---

It works if foo isn't a template, though. The problem when foo is 
a template is that foo!double.foo seems to be an illegal 
construct for the compiler...


how do I get the ith field of a std.typecons.Tuple ?

2013-08-18 Thread Timothee Cour
A)
how do I get the ith field of a std.typecons.Tuple ?
ideally, it should be as simple as:

auto t=Tuple!(int,"name",double,"name2")(1);
static assert(t.fields[0] == "name");

It seems the necessary items are private, so how do I get the ith field of
a std.typecons.Tuple ?
I really don't want to parse T.stringof, which could require a full parser
(eg: with Tuple!(A!"bar","name") )

B)
Related question:
Why isn't slicing and indexing allowed for the Tuple type?
eg:
alias T=typeof(t);
static assert(is(T[0] == int));
static assert(is(T[0..1] == Tuple!(int,"name")));

C)
Same with appending:
static assert(is(T[0..1]~T[1..2] == T));

D)
I'm trying to make an append operator but because of problem (A) above I
can't (for named Tuples).


Re: how do I get the ith field of a std.typecons.Tuple ?

2013-08-18 Thread John Colvin

On Sunday, 18 August 2013 at 08:46:17 UTC, Timothee Cour wrote:

A)
how do I get the ith field of a std.typecons.Tuple ?
ideally, it should be as simple as:

auto t=Tuple!(int,"name",double,"name2")(1);
static assert(t.fields[0] == "name");


"field" is the old name for "expand", retained for compatibility; 
its use is not recommended. It gives you direct access to the tuple 
inside the Tuple struct, but it's for getting the variable 
values, not their names.


If you want to get the *names* you've chosen for the tuple 
fields, you'll have to use traits of some sort I think.


Re: how do I get the ith field of a std.typecons.Tuple ?

2013-08-18 Thread Timothee Cour
On Sun, Aug 18, 2013 at 2:15 AM, John Colvin john.loughran.col...@gmail.com
 wrote:

 On Sunday, 18 August 2013 at 08:46:17 UTC, Timothee Cour wrote:

 A)
 how do I get the ith field of a std.typecons.Tuple ?
 ideally, it should be as simple as:

 auto t=Tuple!(int,"name",double,"name2")(1);
 static assert(t.fields[0] == "name");


 "field" is the old name for "expand", retained for compatibility; its use is not
 recommended. It gives you direct access to the tuple inside the Tuple
 struct, but it's for getting the variable values, not their names.


I didn't mean Tuple.field (as in Tuple.expand), I really meant the name of
the corresponding entry, as shown in my example.

If you want to get the *names* you've chosen for the tuple fields, you'll
 have to use traits of some sort I think.


I don't see how that would work, however I've figured out how to do it:

That's a bit of a hack, but it should work. Should it be included in Phobos,
or, better, shall we fix Tuple with some of the recommendations I gave
above?

import std.typecons;
auto tupleField(T,size_t i)()if(isTuple!T && i<T.length){
  enum foo0=typeof(T.init.slice!(i,i+1)).stringof;
  static assert(foo0[$-2..$]==`")`);//otherwise not a tuple with fields
  enum foo=typeof(T.init.slice!(i,i+1)).stringof[0..$-2];
  size_t j=foo.length;
  while(true){
char fj=foo[--j];
if(fj=='"')
  return foo[j+1..$];
  }
}
unittest{
  import std.typecons;
  auto t=Tuple!(int,"foo",double,"bar")(2,3.4);
  alias T=typeof(t);
  static assert(tupleField!(T,0)=="foo");
  static assert(tupleField!(T,1)=="bar");
}



Re: scoped imports

2013-08-18 Thread Joseph Rushton Wakeling

On Sunday, 18 August 2013 at 01:33:51 UTC, Timothee Cour wrote:
that's not DRY: in my use case, a group of functions use 
certain imports,
it would be annoying and not DRY to do that. What I suggest 
(allowing {}
grouping at module scope) seems simple and intuitive; any 
reason it can't be done?


How do you ensure that the imports are limited to your {} scope, 
but your functions are not, without a finicky special case for 
how scopes work?




Re: how do I get the ith field of a std.typecons.Tuple ?

2013-08-18 Thread Timothee Cour
and this:
auto tupleFields(T)()if(isTuple!T){
  string[T.length]ret;
  foreach(i;Iota!(T.length))
ret[i]=tupleField!(T,i);
  return ret;
}
unittest{
  import std.typecons;
  auto t=Tuple!(int,"foo",double,"bar")(2,3.4);
  alias T=typeof(t);
  static assert(tupleFields!T==["foo","bar"]);
}



On Sun, Aug 18, 2013 at 2:26 AM, Timothee Cour thelastmamm...@gmail.comwrote:




 On Sun, Aug 18, 2013 at 2:15 AM, John Colvin 
 john.loughran.col...@gmail.com wrote:

 On Sunday, 18 August 2013 at 08:46:17 UTC, Timothee Cour wrote:

 A)
 how do I get the ith field of a std.typecons.Tuple ?
 ideally, it should be as simple as:

  auto t=Tuple!(int,"name",double,"name2")(1);
  static assert(t.fields[0] == "name");


  "field" is the old name for "expand", retained for compatibility; its use is not
  recommended. It gives you direct access to the tuple inside the Tuple
  struct, but it's for getting the variable values, not their names.


 I didn't mean Tuple.field (as in Tuple.expand), I really meant the name of
 the corresponding entry, as shown in my example.

 If you want to get the *names* you've chosen for the tuple fields, you'll
 have to use traits of some sort I think.


 I don't see how that would work, however I've figured out how to do it:

 That's a bit of a hack, but it should work. Should it be included in Phobos,
 or, better, shall we fix Tuple with some of the recommendations I gave
 above?
 
 import std.typecons;
 auto tupleField(T,size_t i)()if(isTuple!T && i<T.length){
   enum foo0=typeof(T.init.slice!(i,i+1)).stringof;
   static assert(foo0[$-2..$]==`")`);//otherwise not a tuple with fields
   enum foo=typeof(T.init.slice!(i,i+1)).stringof[0..$-2];
   size_t j=foo.length;
   while(true){
 char fj=foo[--j];
  if(fj=='"')
   return foo[j+1..$];
   }
 }
 unittest{
   import std.typecons;
   auto t=Tuple!(int,"foo",double,"bar")(2,3.4);
   alias T=typeof(t);
   static assert(tupleField!(T,0)=="foo");
   static assert(tupleField!(T,1)=="bar");
 }
 





Re: scoped imports

2013-08-18 Thread Timothee Cour
On Sun, Aug 18, 2013 at 2:31 AM, Joseph Rushton Wakeling 
joseph.wakel...@webdrake.net wrote:

 On Sunday, 18 August 2013 at 01:33:51 UTC, Timothee Cour wrote:

 that's not DRY: in my use case, a group of functions use certain imports,
 it would be annoying and not DRY to do that. What I suggest (allowing {}
 grouping at module scope) seems simple and intuitive; any reason it can't
 be done?


 How do you ensure that the imports are limited to your {} scope, but your
 functions are not, without a finnicky special case for how scopes work?


Granted, that's not ideal. How about the other points I mentioned?
void fun(){
version=A;
version(none):
}


Re: Getting core.exception.OutOfMemoryError error on allocating large arrays

2013-08-18 Thread zorran

on my machine (core i7, 16 GB RAM, Win7/64)
the next code writes core.exception.OutOfMemoryError:
enum long size= 1300_000_000;
auto arr = new byte[size];

but the next code works fine:
enum long size= 1300_000_000;
byte * p = cast(byte *) malloc(size);

I compiled in 64-bit mode
using the flags: dmd -c -m64 test.d


Re: Getting core.exception.OutOfMemoryError error on allocating large arrays

2013-08-18 Thread John Colvin

On Sunday, 18 August 2013 at 12:07:02 UTC, zorran wrote:

on my machine (core i7, 16 GB RAM, Win7/64)
the next code writes core.exception.OutOfMemoryError:
enum long size= 1300_000_000;
auto arr = new byte[size];

but the next code works fine:
enum long size= 1300_000_000;
byte * p = cast(byte *) malloc(size);

I compiled in 64-bit mode
using the flags: dmd -c -m64 test.d


Interesting... What happens if you use core.memory.GC.malloc?
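
i.e. something along these lines (a sketch):

import core.memory : GC;

void main()
{
    enum size_t size = 1_300_000_000;
    // same GC heap as new byte[size], but without the array machinery
    auto p = cast(byte*) GC.malloc(size);
    assert(p !is null);
}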


Re: Getting core.exception.OutOfMemoryError error on allocating large arrays

2013-08-18 Thread zorran



Interesting... What happens if you use core.memory.GC.malloc?


enum long size= 1300_000_000;   
byte * p = cast(byte *) malloc(size);

for(int i=0; i<size; i++)
p[i]=1;

ulong sum=0;
for(int i=0; i<size; i++)
   sum += p[i]; 

writef("%d ", sum); // here written 13



Re: Getting core.exception.OutOfMemoryError error on allocating large arrays

2013-08-18 Thread zorran

On Sunday, 18 August 2013 at 12:40:42 UTC, zorran wrote:



Interesting... What happens if you use core.memory.GC.malloc?


I am using import std.c.stdlib; in the sample.
GC.malloc also produces core.exception.OutOfMemoryError


Re: Getting core.exception.OutOfMemoryError error on allocating large arrays

2013-08-18 Thread John Colvin

On Sunday, 18 August 2013 at 12:40:42 UTC, zorran wrote:



Interesting... What happens if you use core.memory.GC.malloc?


enum long size= 1300_000_000;   
byte * p = cast(byte *) malloc(size);

for(int i=0; i<size; i++)
p[i]=1;

ulong sum=0;
for(int i=0; i<size; i++)
   sum += p[i]; 

writef("%d ", sum); // here written 13



Well that proves malloc is actually allocating the memory.

I'd say file a bug report. This should definitely work.


Win32: How to get the stack trace when compiling with a windows subsystem?

2013-08-18 Thread Andrej Mitrovic
When you compile with -L/SUBSYSTEM:WINDOWS you're essentially building
an app without a console, so if you want to print out messages you'd
have to log them to a file. For example:

-
import core.sys.windows.windows;
import std.stdio;

extern(Windows) HWND GetConsoleWindow();

void main()
{
if (!GetConsoleWindow())
{
stdout.open(r".\stdout.log", "w");
stderr.open(r".\stderr.log", "w");
}

try
{
realMain();
}
catch (Throwable thr)
{
stderr.writeln(thr.msg);
throw thr;
}
}

void realMain()
{
assert(0);
}
-

Compilable and runnable with:

dmd -g -L/SUBSYSTEM:WINDOWS:5.01 -run test.d

This will write the exception message to a log file, however I'd also
like to retrieve the stack trace to log that as well. For example
here's the output in a regular console app (without the windows
subsystem):

core.exception.AssertError@test(29): Assertion failure

0x0041D93B in onAssertError
0x004020DD in void test.realMain() at C:\dev\code\d_code\test.d(30)
0x0040208B in _Dmain at C:\dev\code\d_code\test.d(18)
0x00417BE8 in extern (C) int rt.dmain2._d_run_main(int, char**, extern (C
) int function(char[][])*).void runMain()
0x00417C78 in extern (C) int rt.dmain2._d_run_main(int, char**, extern (C
) int function(char[][])*).void runAll()
0x00417555 in _d_run_main
0x00408690 in main
0x0042DD29 in mainCRTStartup
0x774F33CA in BaseThreadInitThunk
0x77D09ED2 in RtlInitializeExceptionChain
0x77D09EA5 in RtlInitializeExceptionChain

I'd like to log this out when building with a windows subsystem. Is
there any way to do that?


Re: Win32: How to get the stack trace when compiling with a windows subsystem?

2013-08-18 Thread Andrej Mitrovic
On 8/18/13, Andrej Mitrovic andrej.mitrov...@gmail.com wrote:
 if (!GetConsoleWindow())

Actually it would be even better if I could create a console window
when building with subsystem:windows, for debugging purposes. I'll
have a look at MSDN on ways to do this, unless someone already knows
this and posts it here.

Normally I'd just compile in console mode to begin with, but I'm
trying to avoid some GUI-related bugs when an app isn't built with
subsystem windows
(http://forum.dlang.org/thread/caj85nxbnx+8uo5beo1k9q-j0qpovo74ft4laeeksaqjtmul...@mail.gmail.com#post-mpzehfgvzuzspvfoxsxw:40forum.dlang.org).


Re: Win32: How to get the stack trace when compiling with a windows subsystem?

2013-08-18 Thread Adam D. Ruppe

On Sunday, 18 August 2013 at 14:10:07 UTC, Andrej Mitrovic wrote:
Actually it would be even better if I could create a console 
window when building with subsystem:windows, for debugging 
purposes.



extern(Windows) void AllocConsole(); // not sure if that's the 
perfect signature but it works


void main() {
   debug AllocConsole();
   throw new Exception("test");
}


The problem is the console will close before you can actually 
read it! But it is there and the exception text did appear in it 
so that's something.
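
One possible workaround is to catch at the top level and block on input so the console stays up (an untested sketch; whether the C runtime's output is attached to the freshly allocated console is a separate question):

import std.string : toStringz;
import core.stdc.stdio : getchar, puts;

extern(Windows) int AllocConsole();

void main()
{
    debug AllocConsole();
    try
    {
        throw new Exception("test");
    }
    catch (Throwable t)
    {
        puts(t.toString().toStringz); // message plus trace, if available
        getchar();                    // block so the console stays open
        throw t;
    }
}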




static / global operator overload

2013-08-18 Thread Namespace
I can't find anything, so I ask here: what was the reason for 
disallowing static or global operator overloads?
In C++ you can declare operator overloads inside and outside of 
classes (the latter is more popular), so why wasn't this 
introduced in D also?


Thanks in advance. :)


Re: Win32: How to get the stack trace when compiling with a windows subsystem?

2013-08-18 Thread Andrej Mitrovic
On 8/18/13, Adam D. Ruppe destructiona...@gmail.com wrote:
 extern(Windows) void AllocConsole(); // not sure if that's the
 perfect signature but it works

 void main() {
 debug AllocConsole();
  throw new Exception("test");
 }


 The problem is the console will close before you can actually
 read it! But it is there and the exception text did appear in it
 so that's something.

Thanks. However I've found a solution but also a new problem. The
'info' field of a Throwable can be converted to a string, so I can
output this into a log file.

But, the info field is always null in a module constructor:

-
import std.stdio;

import core.sys.windows.windows;
extern(Windows) HWND GetConsoleWindow();

shared static this()
{
stderr.open(r".\stderr.log", "w");

try
{
assert(0);
}
catch (Throwable thr)
{
stderr.writefln("thr.info: %s", thr.info);
}
}

void main() { }
-

$ dmd -g -L/SUBSYSTEM:WINDOWS:5.01 -run test.d && type stderr.log
$ thr.info: null

If I copy-paste the code from the module ctor into main then thr.info
has the proper stack trace information.

Should I be calling some runtime initialization functions in the
module ctor so the stack traces work there?


Re: scoped imports

2013-08-18 Thread Joseph Rushton Wakeling

On Sunday, 18 August 2013 at 09:52:29 UTC, Timothee Cour wrote:

On Sun, Aug 18, 2013 at 2:31 AM, Joseph Rushton Wakeling 
joseph.wakel...@webdrake.net wrote:


On Sunday, 18 August 2013 at 01:33:51 UTC, Timothee Cour wrote:
granted, that's not ideal. How about the other points I 
mentioned?

void fun(){
version=A;
version(none):
}


Not sure I understand what you're trying to achieve there. But as 
an alternative to function-local import, why not split your 
module into a package, with submodules mymodule.bardependent and 
mymodule.nonbardependent ... ?
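
i.e. a layout along these lines (hypothetical module and import names):

// mymodule/bardependent.d
module mymodule.bardependent;
import bar;                    // the shared import lives once, here
void fun1() { /* uses bar */ }

// mymodule/nonbardependent.d
module mymodule.nonbardependent;
void fun2() { /* no bar needed */ }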


Re: static / global operator overload

2013-08-18 Thread Ali Çehreli

On 08/18/2013 07:34 AM, Namespace wrote:

 In C++ you can declare operator overloads inside and outside of classes
 (the latter is more popular)

The latter is popular because a global operator takes advantage of 
implicit type conversions. A global operator+ allows using an int even 
on the left-hand side of the operator.


As a result, assuming that MyInt can be constructed from an int, when 
there is


// Assume this is defined outside of MyInt definition
MyInt operator+(MyInt lhs, MyInt rhs);

the expression

1 + MyInt(2)

is lowered to

operator+(MyInt(1), MyInt(2))

 so why wasn't this introduced in D also?

My guess is that there is no implicit construction of objects in D 
anyway so there wouldn't be that benefit.


Ali



Re: static / global operator overload

2013-08-18 Thread monarch_dodra

On Sunday, 18 August 2013 at 15:29:26 UTC, Ali Çehreli wrote:

On 08/18/2013 07:34 AM, Namespace wrote:

 In C++ you can declare operator overloads inside and outside
of classes
 (the latter is more popular)

The latter is popular because a global operator takes advantage 
of implicit type conversions. A global operator+ allows using 
an int even on the left-hand side of the operator.


As a result, assuming that MyInt can be constructed from an 
int, when there is


// Assume this is defined outside of MyInt definition
MyInt operator+(MyInt lhs, MyInt rhs);

the expression

1 + MyInt(2)

is lowered to

operator+(MyInt(1), MyInt(2))

 so why wasn't this introduced in D also?

My guess is that there is no implicit construction of objects 
in D anyway so there wouldn't be that benefit.


Ali


D defines the member "opBinaryRight", which makes global 
operators unnecessary.


//
import std.stdio;

struct S
{
    void opBinary(string s)(int i)
        if (s == "+")
    {
        writeln(s);
    }
    void opBinaryRight(string s)(int i)
        if (s == "+")
    {
        return this + i;
    }
}

void main()
{
    S s;
    s + 5;
    5 + s;
}
//

Doing this also helps avoid polluting the global namespace with 
operators.


Re: scoped imports

2013-08-18 Thread monarch_dodra
On Sunday, 18 August 2013 at 14:52:04 UTC, Joseph Rushton 
Wakeling wrote:

On Sunday, 18 August 2013 at 09:52:29 UTC, Timothee Cour wrote:

On Sun, Aug 18, 2013 at 2:31 AM, Joseph Rushton Wakeling 
joseph.wakel...@webdrake.net wrote:

On Sunday, 18 August 2013 at 01:33:51 UTC, Timothee Cour 
wrote:
granted, that's not ideal. How about the other points I 
mentioned?

void fun(){
version=A;
version(none):
}


Not sure I understand what you're trying to achieve there. But 
as an alternative to function-local import, why not split your 
module into a package, with submodules mymodule.bardependent 
and mymodule.nonbardependent ... ?


Related: is it possible to pack several modules/submodules in a 
single file? Or do you necessarily have to split them up?


Re: Win32: How to get the stack trace when compiling with a windows subsystem?

2013-08-18 Thread Nick Sabalausky
On Sun, 18 Aug 2013 16:07:20 +0200
Andrej Mitrovic andrej.mitrov...@gmail.com wrote:

 catch (Throwable thr)
 {
 stderr.writeln(thr.msg);

stderr.writeln(thr.msg); // No trace
stderr.writeln(thr); // Includes trace

However, I'm guessing it probably doesn't solve the other problem:

 The
 'info' field of a Throwable can be converted to a string, so I can
 output this into a log file.
 
 But, the info field is always null in a module constructor:


Re: how do I get the ith field of a std.typecons.Tuple ?

2013-08-18 Thread Dicebot
Looking at the Tuple implementation, this information gets lost at 
template instantiation time. I think it is worth a pull request to 
store a properly ordered tuple of field aliases in the Tuple type.
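
A minimal sketch of the idea (a hypothetical helper, not the actual patch): recover the names from the Specs before they are lost and store them as a compile-time array:

template FieldNames(Specs...)
{
    static if (Specs.length == 0)
        enum string[] FieldNames = [];
    else static if (is(typeof(Specs[0]) == string))
        enum string[] FieldNames = [Specs[0]] ~ FieldNames!(Specs[1 .. $]);
    else
        enum string[] FieldNames = FieldNames!(Specs[1 .. $]);
}

unittest
{
    static assert(FieldNames!(int, "foo", double, "bar") == ["foo", "bar"]);
}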

