pragma(mangle) only works on functions

2015-08-16 Thread Freddy via Digitalmars-d
I felt this was important enough to bring over to the D general 
chat.
Original thread:
http://forum.dlang.org/thread/zmmsodqrffvcdqidv...@forum.dlang.org


BBasile's Example:

On Monday, 17 August 2015 at 04:32:47 UTC, BBasile wrote:

On Monday, 17 August 2015 at 02:46:02 UTC, Freddy wrote:

I can't get pragma(mangle) to work on templates (or structs).
[...]


I don't know why but it looks like it only works on functions. 
Even if a struct is not a template the custom symbol mangle 
won't be handled:


---
import std.stdio;

pragma(mangle, "a0") class MyClass{}
pragma(mangle, "a1") struct MyStruct{}
pragma(mangle, "a2") void body_func();
pragma(mangle, "a3") struct MyStructh
{ pragma(mangle, "a4") void foo(){} }

void main()
{
    writeln(MyClass.mangleof);
    writeln(MyStruct.mangleof);
    writeln(body_func.mangleof);
    writeln(MyStructh.mangleof);
    writeln(MyStructh.foo.mangleof);
}
---

which outputs:

---
C13temp_019455687MyClass
S13temp_019455688MyStruct
a2
S13temp_019455689MyStructh
a4
---

'a4' being printed and not 'a3' is interesting, BTW ;)
I think the manual is not clear enough about this pragma:

http://dlang.org/pragma.html#mangle

Unless the spec is made more detailed, this could be considered 
a bug.
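For comparison, the function case from the example above does behave as documented and can be checked at compile time (a minimal sketch):

```d
// pragma(mangle) on a function declaration is honored:
// .mangleof reports the custom symbol name, matching the "a2"
// line in the output above.
pragma(mangle, "a2") void body_func();
static assert(body_func.mangleof == "a2");

void main() {}
```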





Re: Visual Studio Code

2015-08-16 Thread bitwise via Digitalmars-d

On Sunday, 16 August 2015 at 05:12:06 UTC, Joakim wrote:

On Saturday, 15 August 2015 at 18:04:20 UTC, bitwise wrote:
Just a side note, looking at the main page of dlang.org, I 
don't see  any reference to who's using/contributing to D, or 
a link thereto.


I think it would help a lot if the logos of the D language's 
top sponsors could be seen somewhere on the main page, maybe 
along the bottom as "Proud D users" or something. The top ~10 
sponsors could be chosen based on dollar amounts or man-hours 
contributed.


C++ has "Gold members" on their about page:
https://isocpp.org/about

Rust has a "Team" page:
https://www.rust-lang.org/team.html

Python has success stories:
https://www.python.org/about/success/

I could probably find more, but suffice it to say, it's a 
common occurrence.


Heh, funny you mention this, as I have a tab open in my browser 
to the dlang.org GitHub to remind me to submit a PR for just 
such an "about" page.  However, those examples are not that 
great for D, as it has no foundation or levels of sponsorship 
like C++, no formal teams like Rust, and that Python page is 
actually not very good, though certainly long.


At the very least, the logos of Facebook and Sociomantic could be 
displayed at the bottom of the page. I'm not sure who else would 
be included, but I don't think Walter and Andrei would have any 
trouble coming up with a decent-sized list. The point is, I 
believe there should be "proof at a glance" that D is doing well 
in several real-world scenarios.


I was thinking of a page to briefly recap the language's genesis, 
introduce the two BDFLs, and mention corporate and project 
successes, along with some quotes from prominent users.


I believe there is a place for this information, but my specific 
recommendation is to present meaningful proof of D's usefulness 
to potential users as soon and succinctly as possible.


Feel free to submit a PR with what you have in mind and we 
could write it together.


Whether it would ever actually be merged is a different 
question. ;)


Unfortunately, I am a little out of the loop with respect to who 
exactly is using D, but if Walter or Andrei agreed with this 
idea, doing the actual work would be trivial.


Anyways, not making demands here, just my 2 cents :)

Bit


Re: Just updated to 2.068, get random profilegc.log created all over the place

2015-08-16 Thread Walter Bright via Digitalmars-d

On 8/16/2015 4:16 PM, deadalnix wrote:

It looks like every run of whatever I compile generates a profilegc.log file
that only contains:

bytes allocated, type, function, file:line

And that's it. The flags used to compile are -w -debug -gc -unittest. Bug or
feature?


Bug. Please post to bugzilla as a regression.


Just updated to 2.068, get random profilegc.log created all over the place

2015-08-16 Thread deadalnix via Digitalmars-d
It looks like every run of whatever I compile generates a 
profilegc.log file that only contains:


bytes allocated, type, function, file:line

And that's it. The flags used to compile are -w -debug -gc 
-unittest. Bug or feature?


Re: Truly lazy ranges, transient .front, and std.range.Generator

2015-08-16 Thread Alex Parrill via Digitalmars-d
On Saturday, 15 August 2015 at 10:06:13 UTC, Joseph Rushton 
Wakeling wrote:

...


I had this issue recently when reading from a command-line-style 
TCP connection; I needed to read the line up to the \n separator, 
but consuming the separator meant waiting for the next byte that 
would never arrive unless a new command was sent.


So I made a wrapper range that evaluates the wrapped range's 
popFront only when front/empty is first called ("just in time"). 
Source code here: 
https://gist.github.com/ColonelThirtyTwo/0dfe76520efcda02d848
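A minimal sketch of such a deferred wrapper (hypothetical names, not the linked implementation): a call to popFront is only recorded as pending, and the wrapped range is advanced the next time front or empty is queried.

```d
import std.range.primitives;

// Defer the wrapped range's popFront until front/empty is next queried,
// so e.g. a blocking socket read is not issued before the next element
// is actually needed.
struct JitRange(R) if (isInputRange!R)
{
    private R inner;
    private bool pending; // popFront requested but not yet applied

    private void prime()
    {
        if (pending)
        {
            inner.popFront();
            pending = false;
        }
    }

    @property bool empty() { prime(); return inner.empty; }
    @property auto front() { prime(); return inner.front; }
    void popFront() { pending = true; } // deferred
}

auto jitRange(R)(R r) { return JitRange!R(r); }
```

With this, consuming a line's trailing separator only marks the pop as pending; the read for the byte after it happens when the next element is actually requested.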


You can throw it in a UFCS chain anywhere except (for some 
reason) after something that takes a delegate template parameter 
like map. For example:


auto reader = SocketReader(socket).joiner.jitRange.map!(byt => cast(char) byt);




Re: std.data.json formal review

2015-08-16 Thread Walter Bright via Digitalmars-d

On 8/16/2015 5:34 AM, Sönke Ludwig wrote:

Am 16.08.2015 um 02:50 schrieb Walter Bright:

 if (isInputRange!R && is(Unqual!(ElementEncodingType!R) == char))

I'm not a fan of more names for trivia, the deluge of names has its own
costs.


Good, I'll use `if (isInputRange!R && (isSomeChar!(ElementEncodingType!R) ||
isIntegral!(ElementEncodingType!R)))`. It's just used in a number of places and
is quite a bit more verbose (twice as long), and I guess a large number of
algorithms in Phobos accept char ranges, so that may actually warrant a name in
this case.


Except that there is no reason to support wchar, dchar, int, ubyte, or anything 
other than char. The idea is not to support something just because you can, but 
there should be an identifiable, real use case for it first. Has anyone ever 
seen Json data as ulongs? I haven't either.
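As a sketch, the char-only constraint being suggested would look like this on a hypothetical parser entry point (the function name is illustrative):

```d
import std.range.primitives : isInputRange, ElementEncodingType;
import std.traits : Unqual;

// Accept only input ranges whose encoding element is char; integral
// element types (ubyte, ushort, ...) are rejected at compile time.
void parseJSON(R)(R input)
    if (isInputRange!R && is(Unqual!(ElementEncodingType!R) == char))
{
    // lexing/parsing would go here
}

void main()
{
    parseJSON("[1, 2]");  // string: range of immutable(char), accepted
    static assert(!__traits(compiles, parseJSON([1, 2])));  // int[]: rejected
}
```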




The json parser will work fine without doing any validation at all. I've
been implementing string handling code in Phobos with the idea of doing
validation only if the algorithm requires it, and only for those parts
that require it.


Yes, and it won't do that if a char range is passed in. If the integral range
path gets removed there are basically two possibilities left, perform the
validation up-front (slower), or risk UTF exceptions in unrelated parts of the
code base. I don't see why we shouldn't take the opportunity for a full and fast
validation here. But I'll relay this to Andrei, it was his idea originally.


That argument could be used to justify validation in every single algorithm that 
deals with strings.




Why do both? Always return an input range. If the user wants a string,
he can pipe the input range to a string generator, such as .array

Convenience for one.


Back to the previous point, that means that every algorithm in Phobos
should have two versions, one that returns a range and the other a
string? All these variations will result in a combinatorial explosion.


This may be a factor of two, but not a combinatorial explosion.


We're already up to validate or not, to string or not, i.e. 4 combinations.



The other problem, of course, is that returning a string means the
algorithm has to decide how to allocate that string. As much as
possible, algorithms should not be making allocation decisions.


Granted, the fact that format() and to!() support input ranges (I didn't notice
that until now) makes the issue less important. But without those, it would
basically mean that almost all places that generate JSON strings would have to
import std.array and append .array. Nothing particularly bad if viewed in
isolation, but it makes the language appear a lot less clean/more verbose if it
occurs often. It's also a stumbling block for language newcomers.


This has been argued before, and the problem is it applies to EVERY algorithm in 
Phobos, and winds up with a doubling of the number of functions to deal with it. 
I do not view this as clean.


D is going to be built around ranges as a fundamental way of coding. Users will 
need to learn something about them. Appending .array is not a big hill to climb.
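In practice the composition being discussed is a one-liner (an illustrative pipeline, not from the module under review):

```d
import std.algorithm : map;
import std.array : array;
import std.conv : to;
import std.range : iota;

void main()
{
    auto r = iota(3).map!(i => i.to!string); // lazy input range, nothing allocated yet
    string[] s = r.array;                    // materialize only when strings are wanted
    assert(s == ["0", "1", "2"]);
}
```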




There are output range and allocation based float->string conversions available,
but no input range based one. But well, using an internal buffer together with
formattedWrite would probably be a viable workaround...


I plan to fix that, so using a workaround in the meantime is appropriate.
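The buffer-based workaround mentioned above could be sketched like this (the helper name is hypothetical):

```d
import std.array : appender;
import std.format : formattedWrite;

// Format a double into an internal buffer with formattedWrite; the
// resulting char[] is itself a range of char that the rest of the
// pipeline can consume.
char[] floatToChars(double x)
{
    auto buf = appender!(char[])();
    buf.formattedWrite("%.17g", x);
    return buf.data;
}

void main()
{
    assert(floatToChars(0.5) == "0.5");
}
```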



Re: Mid level IR

2015-08-16 Thread deadalnix via Digitalmars-d
On Sunday, 16 August 2015 at 10:12:07 UTC, Ola Fosheim Grøstad 
wrote:

On Saturday, 15 August 2015 at 21:13:33 UTC, deadalnix wrote:
On Friday, 14 August 2015 at 16:13:07 UTC, Ola Fosheim Grøstad 
wrote:


Another option would be to interface with Rust by generating 
Rust MIR from D, if that is possible.


What do you think?


This is what SDC is doing already.


Interesting, is the format documented somewhere?


No, and it is likely to change significantly during development 
at this stage.


Re: std.data.json formal review

2015-08-16 Thread Jacob Carlborg via Digitalmars-d

On 2015-08-16 14:34, Sönke Ludwig wrote:


Good, I'll use `if (isInputRange!R &&
(isSomeChar!(ElementEncodingType!R) ||
isIntegral!(ElementEncodingType!R)))`. It's just used in a number of
places and is quite a bit more verbose (twice as long), and I guess a
large number of algorithms in Phobos accept char ranges, so that may
actually warrant a name in this case.


I agree. Signatures like this are what's making std.algorithm look more 
complicated than it is.


--
/Jacob Carlborg


Re: D for project in computational chemistry

2015-08-16 Thread Idan Arye via Digitalmars-d

On Sunday, 16 August 2015 at 13:11:12 UTC, Yura wrote:

Good afternoon, gentlemen,

just want to describe my very limited experience. I have 
re-written about half of my Python code into D. I got it faster 
by 6 times. This is good news.


However, I was amazed by the performance of D vs Python for the 
following simple nested loops (see below). D was faster by 2 
orders of magnitude!


Bearing in mind that Python is really used in computational 
chemistry/bioinformatics, I am sure D can be a good option in 
this field. In the modern strategy for computational software, 
Python is used as a glue language and the number-crunching 
parts are usually written in Fortran or C/C++. Apparently, with 
D one language can be used to write the entire code. Please 
also look at this article:


http://www.worldcomp-proceedings.com/proc/p2012/PDP3426.pdf

Also, I wonder about the results of this internship:

http://forum.dlang.org/post/laha9j$pc$1...@digitalmars.com

With kind regards,
Yury


Python:

#!/usr/bin/python
import sys, string, os, glob, random
from math import *

a = 0

l = 1000

for i in range(l):
    for j in range(l):
        for m in range(l):
            a = a + i*i*0.7 + j*j*0.8 + m*m*0.9

print a

D:

import std.stdio;
// command line argument
import std.getopt;
import std.string;
import std.array;
import std.conv;
import std.math;

// main program starts here
void main(string[] args) {


int l = 1000;
double a = 0;
for (auto i = 0; i < l; i++)
    for (auto j = 0; j < l; j++)
        for (auto m = 0; m < l; m++)
            a = a + i*i*0.7 + j*j*0.8 + m*m*0.9;
writeln(a);
}

Initially I thought the Python version is so slow because it uses 
`range` instead of `xrange`, but I tried them both and they take 
about the same time, so I guess the Python JIT (or even 
interpreter!) can optimize these allocations away.


BTW - if you want to iterate over a range of numbers in D, you 
can use a foreach loop:


foreach (i; 0 .. l) {
    foreach (j; 0 .. l) {
        foreach (m; 0 .. l) {
            a = a + i * i * 0.7 + j * j * 0.8 + m * m * 0.9;
        }
    }
}

Or, to make it look more like the Python version, you can iterate 
over a range-returning function:


import std.range : iota;
foreach (i; iota(l)) {
    foreach (j; iota(l)) {
        foreach (m; iota(l)) {
            a = a + i * i * 0.7 + j * j * 0.8 + m * m * 0.9;
        }
    }
}

There are also functions for building ranges from other ranges:

import std.algorithm : cartesianProduct;
import std.range : iota;
foreach (i, j, m; cartesianProduct(iota(l), iota(l), iota(l))) {
    a = a + i * i * 0.7 + j * j * 0.8 + m * m * 0.9;
}

Keep in mind though that using these functions, while making the 
code more readable (to those with some experience in D, at 
least), is bad for performance: for my first version I got about 
5 seconds when building with DMD in debug mode, while for the 
last version I get 13 seconds when building with LDC in release 
mode.


Re: D for project in computational chemistry

2015-08-16 Thread Rikki Cattermole via Digitalmars-d

On 17/08/2015 1:11 a.m., Yura wrote:

Good afternoon, gentlemen,

just want to describe my very limited experience. I have re-written
about half of my Python code into D. I got it faster by 6 times. This is
good news.

However, I was amazed by the performance of D vs Python for the following
simple nested loops (see below). D was faster by 2 orders of magnitude!

Bearing in mind that Python is really used in computational
chemistry/bioinformatics, I am sure D can be a good option in this
field. In the modern strategy for computational software, Python is
used as a glue language and the number-crunching parts are usually
written in Fortran or C/C++. Apparently, with D one language can be used
to write the entire code. Please also look at this article:

http://www.worldcomp-proceedings.com/proc/p2012/PDP3426.pdf

Also, I wonder about the results of this internship:

http://forum.dlang.org/post/laha9j$pc$1...@digitalmars.com

With kind regards,
Yury


Python:

#!/usr/bin/python
import sys, string, os, glob, random
from math import *

a = 0

l = 1000

for i in range(l):
    for j in range(l):
        for m in range(l):
            a = a + i*i*0.7 + j*j*0.8 + m*m*0.9

print a

D:

import std.stdio;
// command line argument
import std.getopt;
import std.string;
import std.array;
import std.conv;
import std.math;

// main program starts here
void main(string[] args) {


int l = 1000;
double a = 0;
for (auto i = 0; i < l; i++)
    for (auto j = 0; j < l; j++)
        for (auto m = 0; m < l; m++)
            a = a + i*i*0.7 + j*j*0.8 + m*m*0.9;
writeln(a);
}

Any chance, when you get the time/content, of creating a research 
paper from your use case?

It would be amazing publicity and even more so to get it published!

Otherwise, we could always do with another user story :)


Re: D for project in computational chemistry

2015-08-16 Thread Yura via Digitalmars-d

Good afternoon, gentlemen,

just want to describe my very limited experience. I have 
re-written about half of my Python code into D. I got it faster 
by 6 times. This is good news.


However, I was amazed by the performance of D vs Python for the 
following simple nested loops (see below). D was faster by 2 
orders of magnitude!


Bearing in mind that Python is really used in computational 
chemistry/bioinformatics, I am sure D can be a good option in 
this field. In the modern strategy for computational software, 
Python is used as a glue language and the number-crunching parts 
are usually written in Fortran or C/C++. Apparently, with D one 
language can be used to write the entire code. Please also look 
at this article:


http://www.worldcomp-proceedings.com/proc/p2012/PDP3426.pdf

Also, I wonder about the results of this internship:

http://forum.dlang.org/post/laha9j$pc$1...@digitalmars.com

With kind regards,
Yury


Python:

#!/usr/bin/python
import sys, string, os, glob, random
from math import *

a = 0

l = 1000

for i in range(l):
    for j in range(l):
        for m in range(l):
            a = a + i*i*0.7 + j*j*0.8 + m*m*0.9

print a

D:

import std.stdio;
// command line argument
import std.getopt;
import std.string;
import std.array;
import std.conv;
import std.math;

// main program starts here
void main(string[] args) {


int l = 1000;
double a = 0;
for (auto i = 0; i < l; i++)
    for (auto j = 0; j < l; j++)
        for (auto m = 0; m < l; m++)
            a = a + i*i*0.7 + j*j*0.8 + m*m*0.9;
writeln(a);
}

Re: std.data.json formal review

2015-08-16 Thread Sönke Ludwig via Digitalmars-d

Am 16.08.2015 um 02:50 schrieb Walter Bright:

On 8/15/2015 3:18 AM, Sönke Ludwig wrote:

I don't know what 'isStringInputRange' is. Whatever it is, it should be
a 'range of char'.


I'll rename it to isCharInputRange. We don't have something like that
in Phobos,
right?


That's right, there isn't one. But I use:

 if (isInputRange!R && is(Unqual!(ElementEncodingType!R) == char))

I'm not a fan of more names for trivia, the deluge of names has its own
costs.


Good, I'll use `if (isInputRange!R && 
(isSomeChar!(ElementEncodingType!R) || 
isIntegral!(ElementEncodingType!R)))`. It's just used in a number 
of places and is quite a bit more verbose (twice as long), and I 
guess a large number of algorithms in Phobos accept char ranges, 
so that may actually warrant a name in this case.



There is no reason to validate UTF-8 input. The only place where
non-ASCII code units can even legally appear is inside strings, and
there they can just be copied verbatim while looking for the end of the
string.

The idea is to assume that any char based input is already valid UTF
(as D defines it), while integer based input comes from an unverified
source, so that it still has to be validated before being cast/copied
into a 'string'. I think this is a sensible approach, both semantically
and performance-wise.


The json parser will work fine without doing any validation at all. I've
been implementing string handling code in Phobos with the idea of doing
validation only if the algorithm requires it, and only for those parts
that require it.


Yes, and it won't do that if a char range is passed in. If the integral 
range path gets removed there are basically two possibilities left, 
perform the validation up-front (slower), or risk UTF exceptions in 
unrelated parts of the code base. I don't see why we shouldn't take the 
opportunity for a full and fast validation here. But I'll relay this to 
Andrei, it was his idea originally.



There are many validation algorithms in Phobos one can tack on - having
two implementations of every algorithm, one with an embedded reinvented
validation and one without - is too much.


There is nothing reinvented here. It simply implicitly validates all 
non-string parts of a JSON document and uses validate() for parts of 
JSON strings that can contain unicode characters.



The general idea with algorithms is that they do not combine things, but
they enable composition.


It's just that there is no way to achieve the same performance using 
composition in this case.



Why do both? Always return an input range. If the user wants a string,
he can pipe the input range to a string generator, such as .array

Convenience for one.


Back to the previous point, that means that every algorithm in Phobos
should have two versions, one that returns a range and the other a
string? All these variations will result in a combinatorial explosion.


This may be a factor of two, but not a combinatorial explosion.


The other problem, of course, is that returning a string means the
algorithm has to decide how to allocate that string. As much as
possible, algorithms should not be making allocation decisions.


Granted, the fact that format() and to!() support input ranges (I didn't 
notice that until now) makes the issue less important. But without 
those, it would basically mean that almost all places that generate JSON 
strings would have to import std.array and append .array. Nothing 
particularly bad if viewed in isolation, but it makes the language 
appear a lot less clean/more verbose if it occurs often. It's also a 
stumbling block for language newcomers.



The lack of number to input range conversion functions is
another concern. I'm not really keen to implement an input range style
floating-point to string conversion routine just for this module.


Not sure what you mean. Phobos needs such routines anyway, and you still
have to do something about floating point.


There are output range and allocation based float->string conversions 
available, but no input range based one. But well, using an internal 
buffer together with formattedWrite would probably be a viable workaround...



Finally, I'm a little worried about performance. The output range based
approach can keep a lot of state implicitly using the program counter
register. But an input range would explicitly have to keep track of the
current JSON element, as well as the current character/state within that
element (and possibly one level deeper, for example for escape
sequences). This means that it will require either multiple branches or
indirection for each popFront().


Often this is made up for by not needing to allocate storage. Also, that
state is in the cached "hot zone" on top of the stack, which is much
faster to access than a cold uninitialized array.


Branch misprediction alone will most probably be problematic. But I 
think this can be made fast enough anyway by making the input range 
partially eager and serving chunks of strings at a time.

Re: std.data.json formal review

2015-08-16 Thread Walter Bright via Digitalmars-d

On 8/16/2015 3:39 AM, Dmitry Olshansky wrote:

About x2 faster than decode + check-if-alphabetic on my stuff:

https://github.com/DmitryOlshansky/gsoc-bench-2012

I haven't updated it in a while. There are nice bargraphs for decoding versions
by David comparing DMD vs LDC vs GDC:

Page 15 at http://dconf.org/2013/talks/nadlinger.pdf


Thank you.


Re: std.data.json formal review

2015-08-16 Thread Dmitry Olshansky via Digitalmars-d

On 16-Aug-2015 11:30, Walter Bright wrote:

On 8/15/2015 11:52 PM, Dmitry Olshansky wrote:

For instance, "combining" decoding and character classification one may
side-step generating the codepoint value itself (because now it doesn't
have to produce it for the top-level algorithm).


Perhaps, but I wouldn't be convinced without benchmarks to prove it on a
case-by-case basis.


About x2 faster than decode + check-if-alphabetic on my stuff:

https://github.com/DmitryOlshansky/gsoc-bench-2012

I haven't updated it in a while. There are nice bargraphs for decoding 
versions by David comparing DMD vs LDC vs GDC:


Page 15 at http://dconf.org/2013/talks/nadlinger.pdf



But it's moot, as json lexing never needs to decode.


Agreed.

--
Dmitry Olshansky


Re: Mid level IR

2015-08-16 Thread via Digitalmars-d

On Saturday, 15 August 2015 at 21:13:33 UTC, deadalnix wrote:
On Friday, 14 August 2015 at 16:13:07 UTC, Ola Fosheim Grøstad 
wrote:


Another option would be to interface with Rust by generating 
Rust MIR from D, if that is possible.


What do you think?


This is what SDC is doing already.


Interesting, is the format documented somewhere?


Re: std.data.json formal review

2015-08-16 Thread Walter Bright via Digitalmars-d

On 8/15/2015 11:52 PM, Dmitry Olshansky wrote:

For instance "combining" decoding and character classification one may side-step
generating the codepoint value itself (because now it doesn't have to produce it
for the top-level algorithm).


Perhaps, but I wouldn't be convinced without benchmarks to prove it on a 
case-by-case basis.


But it's moot, as json lexing never needs to decode.
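A sketch of the kind of scan this allows (simplified; assumes well-formed input and does not validate escape sequences):

```d
// Find the closing quote of a JSON string without any UTF decoding:
// non-ASCII code units can only occur inside the string body and are
// skipped byte-by-byte. `i` points just past the opening quote; the
// return value is the index of the closing quote.
size_t scanStringEnd(const(char)[] s, size_t i)
{
    while (i < s.length && s[i] != '"')
    {
        if (s[i] == '\\' && i + 1 < s.length)
            i++; // skip the escaped code unit (handles \")
        i++;
    }
    return i;
}

void main()
{
    assert(scanStringEnd(`"abc" x`, 1) == 4);
    assert(scanStringEnd(`"a\"b" x`, 1) == 5);
}
```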