Re: EMSI has a Github page

2014-06-27 Thread Walter Bright via Digitalmars-d-announce

On 6/26/2014 2:26 PM, Brian Schott wrote:

https://github.com/economicmodeling

Stuff that's been made available:
* D implementation of the DDoc macro processor
* Documentation generator that doesn't need the compiler
 - No more requirement to use all the -I options to just get docs.
 - Template constraints don't vanish.
 - size_t doesn't turn into ulong.
 - Javascript-based offline search.
* Containers library backed by std.allocator
 - Less sitting around waiting for the GC


Very nice. Thank you!


Re: EMSI has a Github page

2014-06-27 Thread Robert Schadek via Digitalmars-d-announce
On 06/27/2014 09:16 AM, Walter Bright via Digitalmars-d-announce wrote:
 On 6/26/2014 2:26 PM, Brian Schott wrote:
 https://github.com/economicmodeling

 Stuff that's been made available:
 * D implementation of the DDoc macro processor
 * Documentation generator that doesn't need the compiler
  - No more requirement to use all the -I options to just get docs.
  - Template constraints don't vanish.
  - size_t doesn't turn into ulong.
  - Javascript-based offline search.
 * Containers library backed by std.allocator
  - Less sitting around waiting for the GC

 Very nice. Thank you!
Indeed, very nice!

but where is the dub package?


Re: EMSI has a Github page

2014-06-27 Thread Jacob Carlborg via Digitalmars-d-announce

On 2014-06-26 23:26, Brian Schott wrote:

* Documentation generator that doesn't need the compiler


Do you have any example of documentation generated with this tool?

--
/Jacob Carlborg


Re: EMSI has a Github page

2014-06-27 Thread Dicebot via Digitalmars-d-announce

On Thursday, 26 June 2014 at 21:26:55 UTC, Brian Schott wrote:

* Documentation generator that doesn't need the compiler


How does it relate to ddox?


DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-27 Thread Andrei Alexandrescu via Digitalmars-d-announce

http://www.reddit.com/r/programming/comments/298vtt/dconf_2014_panel_with_walter_bright_and_andrei/

https://twitter.com/D_Programming/status/482546357690187776

https://news.ycombinator.com/newest

https://www.facebook.com/dlang.org/posts/874091959271153


Andrei


Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-27 Thread Dicebot via Digitalmars-d-announce

http://youtu.be/TNvUIWFy02I


Re: EMSI has a Github page

2014-06-27 Thread Kagamin via Digitalmars-d-announce

https://github.com/economicmodeling/containers/blob/master/src/containers/dynamicarray.d#L72

Does this work? You try to remove the new range instead of the old one. 
Also, you should remove the old range only after you have added the new 
range, so that the GC won't catch you in the middle.


Re: EMSI has a Github page

2014-06-27 Thread Kagamin via Digitalmars-d-announce
And then it will still be able to catch you between realloc and 
addRange.
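
One ordering that avoids both windows, sketched (editorial, not the 
containers code): allocate a fresh block, register it before any 
pointers land in it, and only then migrate and drop the old block - 
avoiding realloc entirely:

---
import core.memory : GC;
import core.stdc.stdlib : free, malloc;
import core.stdc.string : memcpy;

void* grow(void* old, size_t oldLen, size_t newLen)
{
    void* fresh = malloc(newLen);
    GC.addRange(fresh, newLen); // scanned before any pointers move in
    memcpy(fresh, old, oldLen); // the old block is still registered here
    GC.removeRange(old);        // only now stop scanning the old block
    free(old);
    return fresh;
}
---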


Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-27 Thread Walter Bright via Digitalmars-d-announce

On 6/27/2014 12:53 PM, Dicebot wrote:

http://youtu.be/TNvUIWFy02I


Ack, need to work on my posture :-(


Re: EMSI has a Github page

2014-06-27 Thread Brian Schott via Digitalmars-d-announce

On Friday, 27 June 2014 at 20:33:22 UTC, Kagamin wrote:

https://github.com/economicmodeling/containers/blob/master/src/containers/dynamicarray.d#L72

Does this work? You try to remove the new range instead of the old 
one. Also, you should remove the old range only after you have added 
the new range, so that the GC won't catch you in the middle.


The issue tracker is located here: 
https://github.com/economicmodeling/containers/issues


Re: EMSI has a Github page

2014-06-27 Thread Brian Schott via Digitalmars-d-announce

On Friday, 27 June 2014 at 12:31:09 UTC, Dicebot wrote:

On Thursday, 26 June 2014 at 21:26:55 UTC, Brian Schott wrote:

* Documentation generator that doesn't need the compiler


How does it relate to ddox?


DDOX uses the compiler's JSON output. This new documentation 
generator only looks at the code.




Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-27 Thread safety0ff via Digitalmars-d-announce

I have two questions that I've come upon lately:

1) How was it decided that there should be implicit conversion 
between signed and unsigned integers in arithmetic operations, 
and why prefer unsigned numbers?

E.g. Signed / Unsigned = Unsigned.
Is this simply compatibility with C, or is there something greater 
behind this decision?
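
For concreteness, the rule in action (this matches C's usual arithmetic 
conversions):

---
int  s = -2;
uint u = 4;
auto r = s / u;           // typeof(r) is uint
assert(r == 1073741823);  // -2 wraps to 4294967294 before the division
---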


2) With regard to reducing template instantiations:
I've been using a technique similar to the one mentioned in the 
video: separating functions out of templates to reduce bloat.

My question is: does a template such as:
T foo(T)(T x)
if (isIntegral!T) { return x; }

get instantiated multiple times for const, immutable, etc. 
qualifiers on the input?
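
For reference, the technique in sketch form (illustrative names, not 
taken from the video):

---
import std.traits : isIntegral;

private long fooImpl(long x) { return x; }  // the single shared body

T foo(T)(T x) if (isIntegral!T)
{
    // Thin shim: one tiny instantiation per qualified type, while the
    // real work lives in exactly one non-template function.
    return cast(T) fooImpl(x);
}
---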


Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-27 Thread Peter Alexander via Digitalmars-d-announce

On Friday, 27 June 2014 at 23:30:39 UTC, safety0ff wrote:

2) With regard to reducing template instantiations:
I've been using a technique similar to the one mentioned in the 
video: separating functions out of templates to reduce bloat.

My question is: does a template such as:
T foo(T)(T x)
if (isIntegral!T) { return x; }

Get instantiated multiple times for const, immutable, etc. 
qualifiers on the input?


Yes, but bear in mind that those qualifiers are often stripped
with IFTI, e.g.:

int a;
const int b;
immutable int c;
foo(a);
foo(b);
foo(c);

These all call foo!int


Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-27 Thread safety0ff via Digitalmars-d-announce

On Saturday, 28 June 2014 at 02:02:28 UTC, Peter Alexander wrote:

On Friday, 27 June 2014 at 23:30:39 UTC, safety0ff wrote:

2) With regard to reducing template instantiations:
I've been using a technique similar to the one mentioned in 
the video: separating functions out of templates to reduce 
bloat.

My question is: does a template such as:
T foo(T)(T x)
if (isIntegral!T) { return x; }

Get instantiated multiple times for const, immutable, etc. 
qualifiers on the input?


Yes, but bear in mind that those qualifiers are often stripped
with IFTI, e.g.:

int a;
const int b;
immutable int c;
foo(a);
foo(b);
foo(c);

These all call foo!int


Awesome, thanks!


Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-27 Thread Peter Alexander via Digitalmars-d-announce

On Saturday, 28 June 2014 at 02:46:25 UTC, safety0ff wrote:

On Saturday, 28 June 2014 at 02:02:28 UTC, Peter Alexander

int a;
const int b;
immutable int c;
foo(a);
foo(b);
foo(c);

These all call foo!int


Awesome, thanks!


... I just tried this and I'm wrong. The qualifier isn't 
stripped. Gah! Three different versions!


I could have sworn D did this for primitive types. This makes me 
sad :-(


Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-27 Thread safety0ff via Digitalmars-d-announce

On Saturday, 28 June 2014 at 03:33:37 UTC, Peter Alexander wrote:


... I just tried this and I'm wrong. The qualifier isn't 
stripped. Gah! Three different versions!


I could have sworn D did this for primitive types. This makes 
me sad :-(


I guess you can make all kinds of code that depends on the 
qualifier.


I tried using ld.gold to play with icf (identical code folding), 
but I did not manage to get a working binary out of gold 
(regardless of icf).


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Iain Buclaw via Digitalmars-d
On 27 June 2014 02:31, David Nadlinger via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 Hi all,

 right now, the use of std.math over core.stdc.math can cause a huge
 performance problem in typical floating point graphics code. An instance of
 this has recently been discussed here in the Perlin noise benchmark speed
 thread [1], where even LDC, which already beat DMD by a factor of two,
 generated code more than twice as slow as that by Clang and GCC. Here, the
 use of floor() causes trouble. [2]

 Besides the somewhat slow pure D implementations in std.math, the biggest
 problem is the fact that std.math almost exclusively uses reals in its API.
 When working with single- or double-precision floating point numbers, this
 is not only more data to shuffle around than necessary, but on x86_64
 requires the caller to transfer the arguments from the SSE registers onto
 the x87 stack and then convert the result back again. Needless to say, this
 is a serious performance hazard. In fact, this accounts for a 1.9x slowdown
 in the above benchmark with LDC.

 Because of this, I propose to add float and double overloads (at the very
 least the double ones) for all of the commonly used functions in std.math.
 This is unlikely to break much code, but:
  a) Somebody could rely on the fact that the calls effectively widen the
 calculation to 80 bits on x86 when using type deduction.
  b) Additional overloads make e.g. floor ambiguous without context, of
 course.

 What do you think?

 Cheers,
 David


This is the reason why floor is slow: it has an array copy operation.

---
  auto vu = *cast(ushort[real.sizeof/2]*)(x);
---

I didn't like it at the time I wrote it, but at least it prevented the
compiler (gdc) from removing all bit operations that followed.

If there is an alternative to the above, then I'd imagine that would
speed up floor by tenfold.
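
For illustration, a copy-free variant of the idea, sketched for double 
rather than real (editorial sketch, not the std.math code):

---
double floorDouble(double x)
{
    union Bits { double d; ulong u; }
    Bits b;
    b.d = x;
    int exp = cast(int)((b.u >> 52) & 0x7FF) - 1023; // unbiased exponent

    if (x == 0.0 || exp >= 52)               // ±0, NaN, inf, or integral
        return x;
    if (exp < 0)                             // 0 < |x| < 1
        return (b.u >> 63) ? -1.0 : 0.0;

    ulong frac = (1UL << (52 - exp)) - 1;    // mask of the fractional bits
    if ((b.u & frac) == 0)
        return x;                            // already integral
    if (b.u >> 63)
        b.u += 1UL << (52 - exp);            // negative: step away from zero
    b.u &= ~frac;                            // drop the fraction
    return b.d;
}
---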

Regards
Iain


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Iain Buclaw via Digitalmars-d
On 27 June 2014 07:14, Iain Buclaw ibuc...@gdcproject.org wrote:
 On 27 June 2014 02:31, David Nadlinger via Digitalmars-d
 digitalmars-d@puremagic.com wrote:
 Hi all,

 right now, the use of std.math over core.stdc.math can cause a huge
 performance problem in typical floating point graphics code. An instance of
 this has recently been discussed here in the Perlin noise benchmark speed
 thread [1], where even LDC, which already beat DMD by a factor of two,
 generated code more than twice as slow as that by Clang and GCC. Here, the
 use of floor() causes trouble. [2]

 Besides the somewhat slow pure D implementations in std.math, the biggest
 problem is the fact that std.math almost exclusively uses reals in its API.
 When working with single- or double-precision floating point numbers, this
 is not only more data to shuffle around than necessary, but on x86_64
 requires the caller to transfer the arguments from the SSE registers onto
 the x87 stack and then convert the result back again. Needless to say, this
 is a serious performance hazard. In fact, this accounts for an 1.9x slowdown
 in the above benchmark with LDC.

 Because of this, I propose to add float and double overloads (at the very
 least the double ones) for all of the commonly used functions in std.math.
 This is unlikely to break much code, but:
  a) Somebody could rely on the fact that the calls effectively widen the
 calculation to 80 bits on x86 when using type deduction.
  b) Additional overloads make e.g. floor ambiguous without context, of
 course.

 What do you think?

 Cheers,
 David


 This is the reason why floor is slow, it has an array copy operation.

 ---
   auto vu = *cast(ushort[real.sizeof/2]*)(x);
 ---

 I didn't like it at the time I wrote, but at least it prevented the
 compiler (gdc) from removing all bit operations that followed.

 If there is an alternative to the above, then I'd imagine that would
 speed up floor by tenfold.


Can you test with this?

https://github.com/D-Programming-Language/phobos/pull/2274

Float and Double implementations of floor/ceil are trivial and I can add them later.


Re: Module level variable shadowing

2014-06-27 Thread Jacob Carlborg via Digitalmars-d

On 2014-06-26 02:41, Walter Bright wrote:


I suggest that your issues with global variables can be mitigated by
adopting a distinct naming convention for your globals. Frankly, I think
a global variable named x is execrable style - such short names should
be reserved for locals.


No need to have a naming convention. As Teoh said, just always prefix 
the global variables with a dot.


--
/Jacob Carlborg


Re: A Perspective on D from game industry

2014-06-27 Thread Jacob Carlborg via Digitalmars-d

On 2014-06-27 03:16, Nick Sabalausky wrote:


There's other times I've had to get by without debuggers too. Like, in
the earlier days of web dev, it was common to not have a debugger. Or
debugging JS problems that only manifested on Safari (I assume Safari
probably has JS diagnostics/debugging now, but it didn't always. That
was a pain.)


These days there is something called Firebug Lite [1]. It's like Firebug, 
but it's written purely in JavaScript. That means you can use it as a 
bookmarklet in browsers like IE6, on the iPhone, or on other phones that 
don't have a debugger. I think it's even better than the one in the latest 
IE. The downside is that if there's a JavaScript error, the debugger itself 
might not run :(.


[1] https://getfirebug.com/firebuglite

--
/Jacob Carlborg


Re: A Perspective on D from game industry

2014-06-27 Thread Jacob Carlborg via Digitalmars-d

On 2014-06-27 00:57, Sean Kelly wrote:


Yep.  A lot of this is probably because as a server programmer
I've just gotten used to finding bugs this way as a matter of
necessity, but in many cases I actually prefer it to interactive
debugging.  For example, build core.demangle with -debug=trace
and -debug=info set.


I don't know about other debuggers, but with LLDB you can set a 
breakpoint, add commands to that breakpoint which will be executed when 
the breakpoint is hit, and then continue the execution. This means you 
don't need to use the debugger interactively, if you don't want to.


--
/Jacob Carlborg


Re: Module level variable shadowing

2014-06-27 Thread dennis luehring via Digitalmars-d

Am 26.06.2014 02:41, schrieb Walter Bright:

On 6/25/2014 4:03 PM, bearophile wrote:

The simplest way to avoid that kind of bug is to give a "shadowing global x" error
(similar to the shadowing errors D gives with foreach and with statements). But
this breaks most existing D code.


D has scoped lookup. Taking your proposal as a principle, where do we stop
issuing errors when there is the same identifier in multiple in-scope scopes? I
think we hit the sweet spot at restricting shadowing detection to local scopes.

I suggest that your issues with global variables can be mitigated by adopting a
distinct naming convention for your globals. Frankly, I think a global variable
named x is execrable style - such short names should be reserved for locals.



What about adding a switch such as -no-global-shadowing (or similar) to dmd 
and telling people to use it? People will definitely change their global 
names then (as you advised, by renaming or using .x etc.), and after a 
while it could become a warning, then an error - like the period between 
deprecation and removal of a feature. D needs more strategies than C++ 
for adding better quality over time.




Re: Module level variable shadowing

2014-06-27 Thread dennis luehring via Digitalmars-d

Am 27.06.2014 10:20, schrieb dennis luehring:

I think we hit the sweet spot at restricting shadowing detection to local scopes.


Sweet does not mean much if it amounts to "use a better name or .x" to 
avoid problems that are hard to detect manually - it's like having 
shadow detection disabled for everything but local scopes.


What I don't understand: why on earth would someone want to shadow 
a (or indeed any) variable at all?


Re: A Perspective on D from game industry

2014-06-27 Thread Manu via Digitalmars-d
On 27 June 2014 11:16, Nick Sabalausky via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On 6/26/2014 7:24 PM, H. S. Teoh via Digitalmars-d wrote:

 On Thu, Jun 26, 2014 at 10:57:28PM +, Sean Kelly via Digitalmars-d
 wrote:

 On Thursday, 19 June 2014 at 05:35:06 UTC, Nick Sabalausky wrote:


 That's why I inadvertently learned to love printf debugging. I get to
 see the whole chart at once.


 Yep.  A lot of this is probably because as a server programmer
 I've just gotten used to finding bugs this way as a matter of
 necessity, but in many cases I actually prefer it to interactive
 debugging.  For example, build core.demangle with -debug=trace
 and -debug=info set.


 Over the years, I've come to prefer printf debugging too.

 At my job I work with headless embedded systems, and interactive
 debugging can only be done remotely.


 Aye. Sometimes in embedded work, you're *lucky* if you can even do printf at
 all, let alone a debugger. I've had to debug with as little as one LED.
 It's...umm...interesting. And time consuming. Especially when it's ASM.
 (But somewhat of a proud-yet-twisted rite of passage though ;) )

 There's other times I've had to get by without debuggers too. Like, in the
 earlier days of web dev, it was common to not have a debugger. Or debugging
 JS problems that only manifested on Safari (I assume Safari probably has JS
 diagnostics/debugging now, but it didn't always. That was a pain.)

Aye, I wrote my former company's PSP engine with nothing more than the
unit's power light as a debugging tool (at least until I managed to
initialise the display hardware and render something).
I would while(1) around the place... if it reached that point, the
power light stayed on. If it crashed before it reached that point, the
power light went off (after a 20 second delay, which made every single
execution a suspenseful experience!).


Re: D Logos

2014-06-27 Thread Alix Pexton via Digitalmars-d

On 26/06/2014 9:15 PM, Wyatt wrote:


I'll first admit I'm not fond of the font. I do agree with a sans serif
with fairly thick stroke, but I don't like the vertical stress and I
think a wider counter definitely looks better with these proportions.
What's the font in the current logo?


I don't know for sure, but I don't think it's from a named font; rather, 
it is probably hand drawn. (Designers tend to create a lot of partial 
fonts for customers because of licensing issues.) Very few typefaces 
have Ds that are wider than they are tall, as the one in the current 
logo appears to be.



The top two, I'm not sure how I feel about the hard edges.  More
importantly, though, I don't think there's enough contrast between the
red/pink and the white, so the D is rather hard to see.  How about
going darker on those?  Another option might be to give the letter a
thin outline.


The main red that I chose to use in all the variations so far is 
#DD, which was entirely a coincidence, but I do like the 
association. In those top two images, 90% of the time they took to draw 
was spent selecting a shade of pink that I thought contrasted nicely 
with both the red and the white, but that is obviously very subjective.



The bottom six, I'm not keen on the use of black.  I find it clashes
harshly with the white and red.  But changing the white would muddle the
contrast with red, so maybe ease the black back to something a bit more
mild...say, #1F252B?  The shadows on the moons are nice, though; gives
them some interest and lightening the black would mess with that...
maybe make the shadows a darker red?  Not sure on that part.


What is black, white and red all over?
There is a named colour called Outer Space (#414A4C), but I think it's 
meant as a shade of wax crayon. It doesn't look too bad, however, with 
redder shadows on the moons, as you suggested.



-Wyatt


Thanks for your feedback ^^

A...



Re: std.math performance (SSE vs. real)

2014-06-27 Thread hane via Digitalmars-d
On Friday, 27 June 2014 at 06:48:44 UTC, Iain Buclaw via 
Digitalmars-d wrote:
On 27 June 2014 07:14, Iain Buclaw ibuc...@gdcproject.org 
wrote:

On 27 June 2014 02:31, David Nadlinger via Digitalmars-d
digitalmars-d@puremagic.com wrote:

Hi all,

right now, the use of std.math over core.stdc.math can cause 
a huge
performance problem in typical floating point graphics code. 
An instance of
this has recently been discussed here in the Perlin noise 
benchmark speed
thread [1], where even LDC, which already beat DMD by a 
factor of two,
generated code more than twice as slow as that by Clang and 
GCC. Here, the

use of floor() causes trouble. [2]

Besides the somewhat slow pure D implementations in std.math, 
the biggest
problem is the fact that std.math almost exclusively uses 
reals in its API.
When working with single- or double-precision floating point 
numbers, this
is not only more data to shuffle around than necessary, but 
on x86_64
requires the caller to transfer the arguments from the SSE 
registers onto
the x87 stack and then convert the result back again. 
Needless to say, this
is a serious performance hazard. In fact, this accounts for 
an 1.9x slowdown

in the above benchmark with LDC.

Because of this, I propose to add float and double overloads 
(at the very
least the double ones) for all of the commonly used functions 
in std.math.

This is unlikely to break much code, but:
 a) Somebody could rely on the fact that the calls 
effectively widen the

calculation to 80 bits on x86 when using type deduction.
 b) Additional overloads make e.g. floor ambiguous without 
context, of

course.

What do you think?

Cheers,
David



This is the reason why floor is slow, it has an array copy 
operation.


---
  auto vu = *cast(ushort[real.sizeof/2]*)(x);
---

I didn't like it at the time I wrote, but at least it 
prevented the

compiler (gdc) from removing all bit operations that followed.

If there is an alternative to the above, then I'd imagine that 
would

speed up floor by tenfold.



Can you test with this?

https://github.com/D-Programming-Language/phobos/pull/2274

Float and Double implementations of floor/ceil are trivial and 
I can add later.


Nice! I tested with the Perlin noise benchmark, and it got 
faster (in my environment, 1.030s -> 0.848s).

But floor still consumes almost half of the execution time.


Re: Bounty Increase on Issue #1325927

2014-06-27 Thread Don via Digitalmars-d

On Thursday, 26 June 2014 at 21:20:04 UTC, Joakim wrote:
On Thursday, 26 June 2014 at 17:52:13 UTC, Nick Sabalausky 
wrote:

On 6/26/2014 7:02 AM, Shammah Chancellor wrote:
I've increased the bounty on this bug.   Fast CTFE is very 
important.


https://www.bountysource.com/issues/1325927-ctfe-copy-on-write-is-slow-and-causes-huge-memory-usage



This is great news, and I'm sure very much appreciated by all.

I can't help being a little concerned over issue ownership, 
though. My understanding is that Don's already done a large 
amount of work towards this issue. I wonder if that could 
actually be holding people back from contributing to the 
issue, for fear of taking whole pot unfairly (ie, swooping in 
and just doing the last little bit, or being perceived as 
attempting that), or fear of stirring up disagreement over 
money?


Don's a senior developer at a company that just got bought for 
$200 million.  I doubt he's stressing over a $400 bounty, ;) 
especially if it takes some work off his plate.


Yes, of course I'm not interested in bounties. But note that that 
issue is not really a bug, it's a project.
I put hundreds of hours of work into this, to get to the point 
where we are now - fixing the compiler structure to the point 
where a JIT is possible. That work was funded by an insolvency 
payout :). Daniel Murphy has done some work on it, as well.


I doubt bounties are effective as a motivation for this kind of 
thing.




Re: D Logos

2014-06-27 Thread Alix Pexton via Digitalmars-d

On 26/06/2014 9:34 PM, H. S. Teoh via Digitalmars-d wrote:


Of all these, I find that I like the bottom right one the most.


I divided the options into left and right based on the emotion that I 
felt the angle of the shadow on the moon suggested, the left side are 
the ones that I felt were happy and optimistic, the right side the 
sadder ones, so I'm very surprised that anyone would pick a favourite 
from the right side oO




One
thing that could improve, though, is the planet's margin shouldn't be
white; it clashes with the D. Maybe a slight reddening of the white band
should fix it.


Good call, but quick experimentation has not found just the right colour 
yet. It seems that in order not to disappear it needs to get darker.



Also, it may help if the margin of the shadows on the
moons were made a tad softer -- make the edge between the two halves of
each moon just a little less sharp.


I like this idea.



T



Thanks for your feedback.


Re: Bounty Increase on Issue #1325927

2014-06-27 Thread safety0ff via Digitalmars-d

On Friday, 27 June 2014 at 09:42:22 UTC, Don wrote:


Yes, of course I'm not interested in bounties. But note that 
that issue is not really a bug, it's a project.
I put hundreds of hours of work into this, to get to the point 
where we are now - fixing the compiler structure to the point 
where a JIT is possible. That work was funded by an insolvency 
payout :). Daniel Murphy has done some work on it, as well.


I doubt bounties are effective as a motivation for this kind of 
thing.


Is there any chance you could offer a brief summary of the state 
of things w.r.t. this issue?


I.e. expanding on this comment: "Upgrading severity. I've done 
several commits to move towards a solution but I still need to do 
more restructuring to properly fix this."


Perhaps the bounty won't stimulate anybody who doesn't have other 
motivations to improve the situation, but more information about 
the scope of the issue would be helpful to both backers and 
potential claimants.


Re: D Logos

2014-06-27 Thread safety0ff via Digitalmars-d

On Thursday, 26 June 2014 at 08:15:35 UTC, Alix Pexton wrote:


Perhaps just a subtle clean up then?

https://drive.google.com/file/d/0B3i8FWPuOpryTjFybHNYYVVtc1k/edit

A...


I personally like the current one:
- I like how Mars looks like it's a reflection in the logo's 
background.

- I dislike how the moon mixes with the D

As for my opinion about your work, I'm inclined to agree with 
Wyatt's and H. S. Teoh's comments.


Re: std.math performance (SSE vs. real)

2014-06-27 Thread David Nadlinger via Digitalmars-d

On Friday, 27 June 2014 at 09:37:54 UTC, hane wrote:
On Friday, 27 June 2014 at 06:48:44 UTC, Iain Buclaw via 
Digitalmars-d wrote:

Can you test with this?

https://github.com/D-Programming-Language/phobos/pull/2274

Float and Double implementations of floor/ceil are trivial and 
I can add later.


Nice! I tested with the Perlin noise benchmark, and it got 
faster (in my environment, 1.030s -> 0.848s).

But floor still consumes almost half of the execution time.


Wait, so DMD and GDC did actually emit a memcpy/… here? LDC 
doesn't, and the change didn't have much of an impact on 
performance.


What _does_ have a significant impact, however, is that the whole 
of floor() for doubles can be optimized down to

roundsd …,…,0x1
when targeting SSE 4.1, or
vroundsd …,…,…,0x1
when targeting AVX.

This is why std.math will need to build on top of 
compiler-recognizable primitives. Iain, Don, how do you think we 
should handle this? One option would be to build std.math based 
on an extended core.math with functions that are recognized as 
intrinsics or suitably implemented in the compiler-specific 
runtimes. The other option would be for me to submit LDC-specific 
implementations to Phobos.
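
A minimal sketch of the first option (editorial; the module and function 
names here are hypothetical, only the idea of a compiler-recognizable 
primitive is from the post):

---
module core.math_ext;  // hypothetical extension of core.math

// Portable fallback body; a compiler that recognizes fastFloor as an
// intrinsic is free to replace calls with a single rounding instruction.
double fastFloor(double x)
{
    import core.stdc.math : floor;
    return floor(x);
}
---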


Cheers,
David


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Manu via Digitalmars-d
On 27 June 2014 11:31, David Nadlinger via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 Hi all,

 right now, the use of std.math over core.stdc.math can cause a huge
 performance problem in typical floating point graphics code. An instance of
 this has recently been discussed here in the Perlin noise benchmark speed
 thread [1], where even LDC, which already beat DMD by a factor of two,
 generated code more than twice as slow as that by Clang and GCC. Here, the
 use of floor() causes trouble. [2]

 Besides the somewhat slow pure D implementations in std.math, the biggest
 problem is the fact that std.math almost exclusively uses reals in its API.
 When working with single- or double-precision floating point numbers, this
 is not only more data to shuffle around than necessary, but on x86_64
 requires the caller to transfer the arguments from the SSE registers onto
 the x87 stack and then convert the result back again. Needless to say, this
 is a serious performance hazard. In fact, this accounts for an 1.9x slowdown
 in the above benchmark with LDC.

 Because of this, I propose to add float and double overloads (at the very
 least the double ones) for all of the commonly used functions in std.math.
 This is unlikely to break much code, but:
  a) Somebody could rely on the fact that the calls effectively widen the
 calculation to 80 bits on x86 when using type deduction.
  b) Additional overloads make e.g. floor ambiguous without context, of
 course.

 What do you think?

 Cheers,
 David


 [1] http://forum.dlang.org/thread/lo19l7$n2a$1...@digitalmars.com
 [2] Fun fact: As the program happens to deal only with positive numbers, the
 author could have just inserted an int-to-float cast, sidestepping the issue
 altogether. All the other language implementations have the floor() call
 too, though, so it doesn't matter for this discussion.

Totally agree.
Maintaining commitment to deprecated hardware which could be removed
from the silicon at any time is a bit of a problem looking forwards.
Regardless of the decision about whether overloads are created, at the
very least I'd suggest x64 should define real as double, since the
x87 is deprecated and the x64 ABI uses the SSE unit. It makes no sense at
all to use real under any general circumstances in x64 builds.

And aside from that, if you *think* you need real for precision, the
truth is, you probably have bigger problems.
Double already has massive precision. I find it's extremely rare to
have precision problems even with float under most normal usage
circumstances, assuming you are conscious of the relative magnitudes
of your terms.


Re: std.math performance (SSE vs. real)

2014-06-27 Thread David Nadlinger via Digitalmars-d

On Friday, 27 June 2014 at 09:37:54 UTC, hane wrote:
Nice! I tested with the Perlin noise benchmark, and it got 
faster (in my environment, 1.030s -> 0.848s).

But floor still consumes almost half of the execution time.


Oh, and by the way, my optimized version (simply replace floor() 
in perlin_noise.d with a call to llvm_floor() from 
ldc.intrinsics) is 2.8x faster than the original one on my 
machine (both with -mcpu=native).
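
The swap in sketch form (LDC-only; the enclosing function is an 
illustrative stand-in for the benchmark's call site):

---
import ldc.intrinsics : llvm_floor;

double latticeCoord(double x)
{
    // llvm.floor.f64, which the backend lowers to roundsd/vroundsd
    // on SSE 4.1/AVX targets.
    return llvm_floor(x);
}
---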


David


Re: std.math performance (SSE vs. real)

2014-06-27 Thread John Colvin via Digitalmars-d
On Friday, 27 June 2014 at 10:51:05 UTC, Manu via Digitalmars-d 
wrote:

On 27 June 2014 11:31, David Nadlinger via Digitalmars-d
digitalmars-d@puremagic.com wrote:

Hi all,

right now, the use of std.math over core.stdc.math can cause a 
huge
performance problem in typical floating point graphics code. 
An instance of
this has recently been discussed here in the Perlin noise 
benchmark speed
thread [1], where even LDC, which already beat DMD by a factor 
of two,
generated code more than twice as slow as that by Clang and 
GCC. Here, the

use of floor() causes trouble. [2]

Besides the somewhat slow pure D implementations in std.math, 
the biggest
problem is the fact that std.math almost exclusively uses 
reals in its API.
When working with single- or double-precision floating point 
numbers, this
is not only more data to shuffle around than necessary, but on 
x86_64
requires the caller to transfer the arguments from the SSE 
registers onto
the x87 stack and then convert the result back again. Needless 
to say, this
is a serious performance hazard. In fact, this accounts for an 
1.9x slowdown

in the above benchmark with LDC.

Because of this, I propose to add float and double overloads 
(at the very
least the double ones) for all of the commonly used functions 
in std.math.

This is unlikely to break much code, but:
 a) Somebody could rely on the fact that the calls effectively 
widen the

calculation to 80 bits on x86 when using type deduction.
 b) Additional overloads make e.g. floor ambiguous without 
context, of

course.

What do you think?

Cheers,
David


[1] http://forum.dlang.org/thread/lo19l7$n2a$1...@digitalmars.com
[2] Fun fact: As the program happens only deal with positive 
numbers, the
author could have just inserted an int-to-float cast, 
sidestepping the issue
altogether. All the other language implementations have the 
floor() call

too, though, so it doesn't matter for this discussion.


Totally agree.
Maintaining commitment to deprecated hardware which could be 
removed
from the silicone at any time is a bit of a problem looking 
forwards.
Regardless of the decision about whether overloads are created, 
at
very least, I'd suggest x64 should define real as double, since 
the
x87 is deprecated, and x64 ABI uses the SSE unit. It makes no 
sense at

all to use real under any general circumstances in x64 builds.

And aside from that, if you *think* you need real for 
precision, the

truth is, you probably have bigger problems.
Double already has massive precision. I find it's extremely 
rare to

have precision problems even with float under most normal usage
circumstances, assuming you are conscious of the relative 
magnitudes

of your terms.


I think real should stay how it is, as the largest 
hardware-supported floating point type on a system. What needs to 
change is dmd and phobos' default usage of real. Double should be 
the standard. People should be able to reach for real if they 
really need it, but normal D code should target the sweet spot 
that is double*.


I understand why the current situation exists. In 2000 x87 was 
the standard and the 80bit precision came for free.


*The number of algorithms that are both numerically 
stable/correct and benefit significantly from >64-bit doubles is 
very small. The same can't be said for 32-bit floats.


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Remo via Digitalmars-d

On Friday, 27 June 2014 at 11:10:57 UTC, John Colvin wrote:
On Friday, 27 June 2014 at 10:51:05 UTC, Manu via Digitalmars-d 
wrote:

On 27 June 2014 11:31, David Nadlinger via Digitalmars-d
digitalmars-d@puremagic.com wrote:

Hi all,

right now, the use of std.math over core.stdc.math can cause 
a huge
performance problem in typical floating point graphics code. 
An instance of
this has recently been discussed here in the Perlin noise 
benchmark speed
thread [1], where even LDC, which already beat DMD by a 
factor of two,
generated code more than twice as slow as that by Clang and 
GCC. Here, the

use of floor() causes trouble. [2]

Besides the somewhat slow pure D implementations in std.math, 
the biggest
problem is the fact that std.math almost exclusively uses 
reals in its API.
When working with single- or double-precision floating point 
numbers, this
is not only more data to shuffle around than necessary, but 
on x86_64
requires the caller to transfer the arguments from the SSE 
registers onto
the x87 stack and then convert the result back again. 
Needless to say, this
is a serious performance hazard. In fact, this accounts for 
an 1.9x slowdown

in the above benchmark with LDC.

Because of this, I propose to add float and double overloads 
(at the very
least the double ones) for all of the commonly used functions 
in std.math.

This is unlikely to break much code, but:
a) Somebody could rely on the fact that the calls effectively 
widen the

calculation to 80 bits on x86 when using type deduction.
b) Additional overloads make e.g. floor ambiguous without 
context, of

course.

What do you think?

Cheers,
David


[1] http://forum.dlang.org/thread/lo19l7$n2a$1...@digitalmars.com
[2] Fun fact: As the program happens only deal with positive 
numbers, the
author could have just inserted an int-to-float cast, 
sidestepping the issue
altogether. All the other language implementations have the 
floor() call

too, though, so it doesn't matter for this discussion.


Totally agree.
Maintaining commitment to deprecated hardware which could be 
removed
from the silicone at any time is a bit of a problem looking 
forwards.
Regardless of the decision about whether overloads are 
created, at
very least, I'd suggest x64 should define real as double, 
since the
x87 is deprecated, and x64 ABI uses the SSE unit. It makes no 
sense at

all to use real under any general circumstances in x64 builds.

And aside from that, if you *think* you need real for 
precision, the

truth is, you probably have bigger problems.
Double already has massive precision. I find it's extremely 
rare to

have precision problems even with float under most normal usage
circumstances, assuming you are conscious of the relative 
magnitudes

of your terms.


I think real should stay how it is, as the largest 
hardware-supported floating point type on a system. What needs 
to change is dmd and phobos' default usage of real. Double 
should be the standard. People should be able to reach for real 
if they really need it, but normal D code should target the 
sweet spot that is double*.


I understand why the current situation exists. In 2000 x87 was 
the standard and the 80bit precision came for free.


*The number of algorithms that are both numerically 
stable/correct and benefit significantly from >64-bit doubles 
is very small. The same can't be said for 32-bit floats.



Totally agree!
Please add float and double overloads and make double the default.
Sometimes float is just enough, but in most cases double should 
be used.


If someone needs more precision than double can provide, then 80 bits 
will probably not be enough anyway.


IMHO intrinsics should be used by default if possible.




Re: std.math performance (SSE vs. real)

2014-06-27 Thread Russel Winder via Digitalmars-d
On Fri, 2014-06-27 at 11:10 +0000, John Colvin via Digitalmars-d wrote:
[…]
 I understand why the current situation exists. In 2000 x87 was 
 the standard and the 80bit precision came for free.

Real programmers have been using 128-bit floating point for decades. All
this namby-pamby 80-bit stuff is just an aberration and should never
have happened.

[…]

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: std.math performance (SSE vs. real)

2014-06-27 Thread Iain Buclaw via Digitalmars-d
On 27 June 2014 11:47, David Nadlinger via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Friday, 27 June 2014 at 09:37:54 UTC, hane wrote:

 On Friday, 27 June 2014 at 06:48:44 UTC, Iain Buclaw via Digitalmars-d
 wrote:

 Can you test with this?

 https://github.com/D-Programming-Language/phobos/pull/2274

 Float and Double implementations of floor/ceil are trivial and I can add
 later.


 Nice! I tested with the Perlin noise benchmark, and it got faster (in my
 environment, 1.030s -> 0.848s).
 But floor still consumes almost half of the execution time.


 Wait, so DMD and GDC did actually emit a memcpy/… here? LDC doesn't, and the
 change didn't have much of an impact on performance.


Yes, IIRC _d_arraycopy to be exact (so we lose doubly so!)


 What _does_ have a significant impact, however, is that the whole of floor()
 for doubles can be optimized down to
 roundsd …,…,0x1
 when targeting SSE 4.1, or
 vroundsd …,…,…,0x1
 when targeting AVX.

 This is why std.math will need to build on top of compiler-recognizable
 primitives. Iain, Don, how do you think we should handle this?

My opinion is that we should never have pushed a variable-sized type
as the baseline for all floating point computations in the first
place.

But as we can't backtrack now, overloads will just have to do.  I
would welcome a DIP to add new core.math intrinsics that can be
proven useful, for the sake of maintainability (and portability).

Regards
Iain



Re: A Perspective on D from game industry

2014-06-27 Thread Paulo Pinto via Digitalmars-d
On Friday, 27 June 2014 at 02:11:50 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Thu, Jun 26, 2014 at 09:16:27PM -0400, Nick Sabalausky via 
Digitalmars-d wrote:

[...]
Aye. Sometimes in embedded work, you're *lucky* if you can 
even do
printf at all, let alone a debugger. I've had to debug with as 
little

as one LED.  It's...umm...interesting. And time consuming.
Especially when it's ASM.  (But somewhat of a 
proud-yet-twisted rite

of passage though ;) )


Reminds me of time I hacked an old Apple II game's copy 
protection by
using a disk editor and writing in the instruction opcodes 
directly. :-)



There's other times I've had to get by without debuggers too. 
Like, in
the earlier days of web dev, it was common to not have a 
debugger. Or
debugging JS problems that only manifested on Safari (I assume 
Safari
probably has JS diagnostics/debugging now, but it didn't 
always. That

was a pain.)


Argh... you remind me of times when I had to debug like 50kloc of
Javascript for a single typo on IE6, when IE6 has no debugger, 
not even
a JS error console, or anything whatsoever that might indicate 
something
went wrong except for a blank screen where there should be 
JS-rendered
content. It wasn't so bad when the same bug showed up in 
Firefox or
Opera, which do have sane debuggers; but when the bug is 
specific to IE,
it feels like shooting a gun blindfolded in pitch darkness and 
hoping

you'll hit bulls-eye by pure dumb luck.


T


IE6 had a debugger, it just wasn't installed by default.

You needed to install the debugger for Windows Scripting Host.

--
Paulo



Re: std.math performance (SSE vs. real)

2014-06-27 Thread dennis luehring via Digitalmars-d

Am 27.06.2014 14:20, schrieb Russel Winder via Digitalmars-d:

On Fri, 2014-06-27 at 11:10 +, John Colvin via Digitalmars-d wrote:
[…]

I understand why the current situation exists. In 2000 x87 was
the standard and the 80bit precision came for free.


Real programmers have been using 128-bit floating point for decades. All
this namby-pamby 80-bit stuff is just an aberration and should never
have happened.


what consumer hardware and compilers support 128-bit floating point?



Re: std.math performance (SSE vs. real)

2014-06-27 Thread John Colvin via Digitalmars-d

On Friday, 27 June 2014 at 13:04:31 UTC, dennis luehring wrote:

Am 27.06.2014 14:20, schrieb Russel Winder via Digitalmars-d:
On Fri, 2014-06-27 at 11:10 +, John Colvin via 
Digitalmars-d wrote:

[…]

I understand why the current situation exists. In 2000 x87 was
the standard and the 80bit precision came for free.


Real programmers have been using 128-bit floating point for 
decades. All
this namby-pamby 80-bit stuff is just an aberration and should 
never

have happened.


what consumer hardware and compiler supports 128-bit floating 
points?


I think he was joking :)

No consumer hardware supports IEEE binary128 as far as I know. 
Wikipedia suggests that Sparc used to have some support.


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Element 126 via Digitalmars-d

On 06/27/2014 03:04 PM, dennis luehring wrote:

Am 27.06.2014 14:20, schrieb Russel Winder via Digitalmars-d:

On Fri, 2014-06-27 at 11:10 +, John Colvin via Digitalmars-d wrote:
[…]

I understand why the current situation exists. In 2000 x87 was
the standard and the 80bit precision came for free.


Real programmers have been using 128-bit floating point for decades. All
this namby-pamby 80-bit stuff is just an aberration and should never
have happened.


what consumer hardware and compiler supports 128-bit floating points?



I noticed that std.math mentions partial support for big endian non-IEEE 
doubledouble. I first thought that it was a software implementation like 
the QD library [1][2][3], but I could not find how to use it on x86_64.

It looks like it is only available for the PowerPC architecture.
Does anyone know more about it?

[1] http://crd-legacy.lbl.gov/~dhbailey/mpdist/
[2] 
http://web.mit.edu/tabbott/Public/quaddouble-debian/qd-2.3.4-old/docs/qd.pdf

[3] www.davidhbailey.com/dhbpapers/quad-double.pdf
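
For reference, the core trick such libraries build on, sketched in D 
(editorial illustration; QD itself is C++):

---
// A double-double stores a value as an unevaluated sum hi + lo.
// Knuth's two-sum recovers the exact rounding error of an addition.
struct DoubleDouble { double hi, lo; }

DoubleDouble twoSum(double a, double b)
{
    double s  = a + b;
    double bv = s - a;                      // portion of b absorbed into s
    double e  = (a - (s - bv)) + (b - bv);  // exact error: s + e == a + b
    return DoubleDouble(s, e);
}
---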


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Iain Buclaw via Digitalmars-d
On 27 June 2014 14:24, Element 126 via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On 06/27/2014 03:04 PM, dennis luehring wrote:

 Am 27.06.2014 14:20, schrieb Russel Winder via Digitalmars-d:

 On Fri, 2014-06-27 at 11:10 +, John Colvin via Digitalmars-d wrote:
 […]

 I understand why the current situation exists. In 2000 x87 was
 the standard and the 80bit precision came for free.


 Real programmers have been using 128-bit floating point for decades. All
 this namby-pamby 80-bit stuff is just an aberration and should never
 have happened.


 what consumer hardware and compiler supports 128-bit floating points?


 I noticed that std.math mentions partial support for big endian non-IEEE
 doubledouble. I first thought that it was a software implemetation like the
 QD library [1][2][3], but I could not find how to use it on x86_64.
 It looks like it is only available for the PowerPC architecture.
 Does anyone know about it ?


We only support native types in std.math.  And "partial support" is
saying more than what's actually there. :-)



Re: std.math performance (SSE vs. real)

2014-06-27 Thread Iain Buclaw via Digitalmars-d
On 27 June 2014 07:48, Iain Buclaw ibuc...@gdcproject.org wrote:
 On 27 June 2014 07:14, Iain Buclaw ibuc...@gdcproject.org wrote:
 On 27 June 2014 02:31, David Nadlinger via Digitalmars-d
 digitalmars-d@puremagic.com wrote:
 Hi all,

 right now, the use of std.math over core.stdc.math can cause a huge
 performance problem in typical floating point graphics code. An instance of
 this has recently been discussed here in the Perlin noise benchmark speed
 thread [1], where even LDC, which already beat DMD by a factor of two,
 generated code more than twice as slow as that by Clang and GCC. Here, the
 use of floor() causes trouble. [2]

 Besides the somewhat slow pure D implementations in std.math, the biggest
 problem is the fact that std.math almost exclusively uses reals in its API.
 When working with single- or double-precision floating point numbers, this
 is not only more data to shuffle around than necessary, but on x86_64
 requires the caller to transfer the arguments from the SSE registers onto
 the x87 stack and then convert the result back again. Needless to say, this
 is a serious performance hazard. In fact, this accounts for an 1.9x slowdown
 in the above benchmark with LDC.

 Because of this, I propose to add float and double overloads (at the very
 least the double ones) for all of the commonly used functions in std.math.
 This is unlikely to break much code, but:
  a) Somebody could rely on the fact that the calls effectively widen the
 calculation to 80 bits on x86 when using type deduction.
  b) Additional overloads make e.g. floor ambiguous without context, of
 course.

 What do you think?

 Cheers,
 David


 This is the reason why floor is slow, it has an array copy operation.

 ---
   auto vu = *cast(ushort[real.sizeof/2]*)(x);
 ---

 I didn't like it at the time I wrote, but at least it prevented the
 compiler (gdc) from removing all bit operations that followed.

 If there is an alternative to the above, then I'd imagine that would
 speed up floor by tenfold.


 Can you test with this?

 https://github.com/D-Programming-Language/phobos/pull/2274

 Float and Double implementations of floor/ceil are trivial and I can add 
 later.


Added float/double implementations.


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Kai Nacke via Digitalmars-d
On Friday, 27 June 2014 at 13:50:29 UTC, Iain Buclaw via 
Digitalmars-d wrote:

On 27 June 2014 14:24, Element 126 via Digitalmars-d
digitalmars-d@puremagic.com wrote:

On 06/27/2014 03:04 PM, dennis luehring wrote:


Am 27.06.2014 14:20, schrieb Russel Winder via Digitalmars-d:


On Fri, 2014-06-27 at 11:10 +, John Colvin via 
Digitalmars-d wrote:

[…]


I understand why the current situation exists. In 2000 x87 
was

the standard and the 80bit precision came for free.



Real programmers have been using 128-bit floating point for 
decades. All
this namby-pamby 80-bit stuff is just an aberration and 
should never

have happened.



what consumer hardware and compiler supports 128-bit floating 
points?




I noticed that std.math mentions partial support for big 
endian non-IEEE
doubledouble. I first thought that it was a software 
implemetation like the
QD library [1][2][3], but I could not find how to use it on 
x86_64.
It looks like it is only available for the PowerPC 
architecture.

Does anyone know about it ?



We only support native types in std.math.  And partial support 
is

saying more than what there actually is. :-)


The doubledouble type is available for PowerPC. In fact, I am trying to 
use this for my PowerPC64 port of LDC. The partial support here 
is a bit annoying, but I have not found the time to implement the 
missing functions myself.


It is native in the sense that it is a type supported by gcc 
and xlc.


Regards,
Kai


Re: Bounty Increase on Issue #1325927

2014-06-27 Thread Iain Buclaw via Digitalmars-d
On 27 June 2014 10:42, Don via Digitalmars-d
digitalmars-d@puremagic.com wrote:

 I doubt bounties are effective as a motivation for this kind of thing.


+1


Re: Bounty Increase on Issue #1325927

2014-06-27 Thread Andrej Mitrovic via Digitalmars-d
That's a pretty big bounty though. I bet it would be motivating for
the jobless. :P

On 6/27/14, Iain Buclaw via Digitalmars-d digitalmars-d@puremagic.com wrote:
 On 27 June 2014 10:42, Don via Digitalmars-d
 digitalmars-d@puremagic.com wrote:

 I doubt bounties are effective as a motivation for this kind of thing.


 +1



Re: Bounty Increase on Issue #1325927

2014-06-27 Thread Etienne via Digitalmars-d

On 2014-06-27 5:53 AM, safety0ff wrote:

Perhaps the bounty won't stimulate anybody who doesn't have other
motivations to improve the situation, but more information about the
scope of the issue would be helpful to both backers and potential
claimants.


From what I've seen writing an ASN.1 compiler with D, sometimes you 
just don't know if some part of the tree structure generated from the 
CTFE is referenced anywhere else (which is somewhat possible to track 
with reference counts). The garbage collector solves a potential 500+ 
hours of work making tree structures referentially self-aware.


My guess is that the compiler doesn't know if parts of the CTFE function 
will be used at runtime; no matter how obvious it is that they won't, 
there's just no information kept around about it, and it gets 
confused with the tree structures used and sent to the backend for the 
runtime routines.




Re: Bounty Increase on Issue #1325927

2014-06-27 Thread Andrei Alexandrescu via Digitalmars-d

On 6/27/14, 8:54 AM, Andrej Mitrovic via Digitalmars-d wrote:

That's a pretty big bounty though. I bet it would be motivating for
the jobless. :P

On 6/27/14, Iain Buclaw via Digitalmars-d digitalmars-d@puremagic.com wrote:

On 27 June 2014 10:42, Don via Digitalmars-d
digitalmars-d@puremagic.com wrote:


I doubt bounties are effective as a motivation for this kind of thing.



+1


There are always students and un(der)employed people who have a passion 
for something, but need to mind other things to make ends meet. Bounties 
allow them to work on what they like and also make some money.


Facebook granted me some additional budget for bounties. I am looking 
for ideas on allocating it.



Andrei



Re: Few recent dmd pull requests

2014-06-27 Thread Jacob Carlborg via Digitalmars-d

On 2014-06-26 16:37, H. S. Teoh via Digitalmars-d wrote:


This is probably because without -D, the entire ddoc code doesn't even
run (which probably saves on compilation time), and comments are not
kept by the parser/lexer, so by the time the compiler evaluates
__traits(comment...), it doesn't know how to retrieve the comments
anymore.


__traits(getUnitTests) also depends on a compiler flag (-unittest).

--
/Jacob Carlborg


Re: Module level variable shadowing

2014-06-27 Thread Kapps via Digitalmars-d

On Friday, 27 June 2014 at 08:24:16 UTC, dennis luehring wrote:

Am 27.06.2014 10:20, schrieb dennis luehring:

I think we hit the sweet spot at restricting shadowing detection 
to local scopes.


Sweet does not mean much if it amounts to "use a better name or 
.x" to avoid problems that are hard to detect manually - it's like 
having shadow detection disabled for everything but local scopes.


What I don't understand: why on earth would someone want to 
shadow a (or indeed any) variable at all?


struct Foo {
    int a;
    this(int a) {
        this.a = a;
    }
}


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Kagamin via Digitalmars-d

On Friday, 27 June 2014 at 14:50:14 UTC, Kai Nacke wrote:
The doubledouble type is available for PowerPC. In fact, I try 
to use this for my PowerPC64 port of LDC. The partial support 
here is a bit annoying but I did not find the time to implement 
the missing functions myself.


It is native in the sense that it is a supported type by gcc 
and xlc.


Doesn't SSE2 effectively operate on double doubles too with 
instructions like addpd (and others *pd)?


Send file to printer in D language ( windows )

2014-06-27 Thread Alexandre via Digitalmars-d
I searched the internet for a way to send documents (txt) to the 
printer, but found nothing. How can I send TXT files to the 
printer using the D language?


Re: Send file to printer in D language ( windows )

2014-06-27 Thread Kagamin via Digitalmars-d

http://msdn.microsoft.com/en-us/library/windows/desktop/dd162859%28v=vs.85%29.aspx
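
One simple route, sketched (editorial, Windows-only; assumes the 
core.sys.windows bindings, and hands the file to the shell's "print" 
verb, so the application associated with .txt does the actual printing):

---
import core.sys.windows.windows;
import core.sys.windows.shellapi;
import std.utf : toUTF16z;

void printTextFile(string path)
{
    // ShellExecuteW returns a pseudo-HINSTANCE greater than 32 on success.
    auto r = ShellExecuteW(null, "print"w.ptr, path.toUTF16z,
                           null, null, SW_HIDE);
    assert(cast(size_t) r > 32, "printing failed");
}
---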


Re: Pair literal for D language

2014-06-27 Thread Dicebot via Digitalmars-d
On Friday, 27 June 2014 at 05:45:19 UTC, H. S. Teoh via 
Digitalmars-d wrote:
I agree, but that's what they're called in the compiler source 
code, so

it's kinda hard to call them something else.


Most people never look in compiler source code so let's pretend it 
does not exist ;) http://wiki.dlang.org/DIP54


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Kagamin via Digitalmars-d
I think, make real==double on x86-64, like on other 
architectures, because double is the way to go.


Re: Pair literal for D language

2014-06-27 Thread H. S. Teoh via Digitalmars-d
On Fri, Jun 27, 2014 at 06:32:34PM +, Dicebot via Digitalmars-d wrote:
 On Friday, 27 June 2014 at 05:45:19 UTC, H. S. Teoh via Digitalmars-d wrote:
 I agree, but that's what they're called in the compiler source code,
 so it's kinda hard to call them something else.
 
 Most people never look in compiler source code so lets pretend it does
 not exist ;) http://wiki.dlang.org/DIP54

On the whole, I support it. But somebody needs to make the PR, otherwise
nothing will happen. ;-)


T

-- 
People tell me that I'm skeptical, but I don't believe it.


Re: Pair literal for D language

2014-06-27 Thread Dicebot via Digitalmars-d
On Friday, 27 June 2014 at 19:11:45 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Fri, Jun 27, 2014 at 06:32:34PM +, Dicebot via 
Digitalmars-d wrote:
On Friday, 27 June 2014 at 05:45:19 UTC, H. S. Teoh via 
Digitalmars-d wrote:
I agree, but that's what they're called in the compiler 
source code,

so it's kinda hard to call them something else.

Most people never look in compiler source code so lets pretend 
it does

not exist ;) http://wiki.dlang.org/DIP54


On the whole, I support it. But somebody needs to make the PR, 
otherwise

nothing will happen. ;-)


On my todo list; a matter of prerequisites :( 
http://wiki.dlang.org/DIP63 is currently a blocker (and the thing 
I am working on right now), 
https://github.com/D-Programming-Language/dmd/pull/3651 is also 
very desirable. And merging each PR is a battle of its own.


Re: A Perspective on D from game industry

2014-06-27 Thread Nick Sabalausky via Digitalmars-d

On 6/26/2014 10:10 PM, H. S. Teoh via Digitalmars-d wrote:

On Thu, Jun 26, 2014 at 09:16:27PM -0400, Nick Sabalausky via Digitalmars-d 
wrote:
[...]

Aye. Sometimes in embedded work, you're *lucky* if you can even do
printf at all, let alone a debugger. I've had to debug with as little
as one LED.  It's...umm...interesting. And time consuming.
Especially when it's ASM.  (But somewhat of a proud-yet-twisted rite
of passage though ;) )


Reminds me of time I hacked an old Apple II game's copy protection by
using a disk editor and writing in the instruction opcodes directly. :-)



Cool. I once tried to hack a game I'd bought to change/remove the part 
where it took my name directly from the payment method and displayed 
that it was registered to "Nicolas" instead of "Nick" in big bold 
letters on the title screen. I didn't quite get that adjusted, but I did 
wind up with a tool (in D) to pack/unpack the game's resource file format.




Re: Few recent dmd pull requests

2014-06-27 Thread Kagamin via Digitalmars-d

On Thursday, 26 June 2014 at 10:38:54 UTC, bearophile wrote:

https://github.com/D-Programming-Language/dmd/pull/3615

Will allow very handy, more DRY, and less bug-prone code like this:

// static array type
int[$]   a1 = [1,2];// int[2]
auto[$]  a2 = [3,4,5];  // int[3]
const[$] a3 = [6,7,8];  // const(int[3])

A comment by Walter:

My reservation on this is I keep thinking there must be a 
better way than [$].


It can share syntax with explicit array operations:
int[*] a1 = [1,2]; // int[2]
int[*] a2 = [3,4]; // int[2]
a1[*] = a2[*]; // copy a2 to a1
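
For comparison, a library-level approximation is already possible 
today -- a hedged sketch (helper name mine, not from the pull 
request), relying on IFTI's ability to infer a static array's length 
from its literal initializer:

T[n] toStatic(T, size_t n)(T[n] array)
{
    return array;
}

void main()
{
    auto a1 = toStatic([1, 2]);    // int[2]
    auto a2 = toStatic([3, 4, 5]); // int[3]
    static assert(is(typeof(a1) == int[2]));
    static assert(is(typeof(a2) == int[3]));
}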


Re: A Perspective on D from game industry

2014-06-27 Thread H. S. Teoh via Digitalmars-d
On Fri, Jun 27, 2014 at 03:36:08PM -0400, Nick Sabalausky via Digitalmars-d 
wrote:
 On 6/26/2014 10:10 PM, H. S. Teoh via Digitalmars-d wrote:
 On Thu, Jun 26, 2014 at 09:16:27PM -0400, Nick Sabalausky via Digitalmars-d 
 wrote:
 [...]
 Aye. Sometimes in embedded work, you're *lucky* if you can even do
 printf at all, let alone a debugger. I've had to debug with as
 little as one LED.  It's...umm...interesting. And time consuming.
 Especially when it's ASM.  (But somewhat of a proud-yet-twisted rite
 of passage though ;) )
 
 Reminds me of time I hacked an old Apple II game's copy protection by
 using a disk editor and writing in the instruction opcodes directly.
 :-)
 
 
 Cool. I once tried to hack a game I'd bought to change/remove the part
 where it took my name directly from the payment method and displayed
 that it was registered to "Nicolas" instead of "Nick" in big bold
 letters on the title screen. I didn't quite get that adjusted, but I
 did wind up with a tool (in D) to pack/unpack the game's resource file
 format.

Heh, nice! :)

On another note, something more recent that I'm quite proud of, was to
fix a bug that I couldn't reproduce locally, for which the only
information I had was the segfault stacktrace the customer gave in the
bug report (which had no symbols resolved, btw, just raw hex addresses).
I looked up the exact firmware build number he was using, and got myself
a copy of the binary from the official release firmware FTP server. Of
course, that didn't have any symbols either (it's a release build), but
at least the addresses on the stacktrace matched up with the addresses
in the disassembly of the binary. So I had to check out the precise
revision of the source tree used to make that build from revision
control, build it with symbols, then match up the function addresses so
that I could identify them. However, the last few frames on the
stacktrace are static functions, which have no symbols in the binary
even in my build, so I had to trace through the stacktrace by comparing
the disassembly with the source code to find the offending function,
then find the offending line by tracing through the disassembly and
matching it up with the source code, up to the point of the segfault.
Once I found the exact source line, the register values on the
stacktrace indicated that it was a null dereference, so I worked
backwards, in the source code now, until I identified the exact variable
corresponding to the register that held the NULL pointer (the compiler's
optimizer shuffled the variable around between RAM and various registers
as the function progressed, so all of that had to be unravelled before
the exact variable could be identified). After that, I could resume the
regular routine of tracing the paths through which the NULL could have
come.

You have no idea how awesome it felt when my test image (which I
couldn't test locally since I couldn't reproduce the bug), installed on
the customer's backup test environment, worked the first time.


T

-- 
Claiming that your operating system is the best in the world because more 
people use it is like saying McDonalds makes the best food in the world. -- 
Carl B. Constantine


Re: Module level variable shadowing

2014-06-27 Thread Tofu Ninja via Digitalmars-d

On Friday, 27 June 2014 at 08:24:16 UTC, dennis luehring wrote:
what i don't understand - why on earth should someone want to 
shadow a(or better any) variable at all?


It can be useful if you are using mixins where you don't know 
what is going to be in the destination scope.


Re: Module level variable shadowing

2014-06-27 Thread Walter Bright via Digitalmars-d

On 6/27/2014 1:38 PM, Tofu Ninja wrote:

On Friday, 27 June 2014 at 08:24:16 UTC, dennis luehring wrote:

what i don't understand - why on earth should someone want to shadow a(or
better any) variable at all?


It can be useful if you are using mixins where you don't know what is going to
be in the destination scope.


Is true. People who do metaprogramming with C macros have all kinds of problems 
with the lack of scoping of temp variable names within the macros.
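
A hedged sketch of the scenario (names mine): a string mixin 
introduces a temporary into whatever scope it is mixed into, so it can 
collide with a name that is already there.

enum swapCode = q{
    {
        auto swapTmp = a; // lands in whatever scope the mixin is used in
        a = b;
        b = swapTmp;
    }
};

void main()
{
    int a = 1, b = 2;
    mixin(swapCode);
    assert(a == 2 && b == 1);
    // Had main() already declared its own swapTmp, D's local shadowing
    // check would reject the mixin -- unlike a C macro, which would
    // silently capture or clobber the outer name.
}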


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Element 126 via Digitalmars-d

On 06/27/2014 08:19 PM, Kagamin wrote:

On Friday, 27 June 2014 at 14:50:14 UTC, Kai Nacke wrote:

The doubledouble type is available for PowerPC. In fact, I try to use
this for my PowerPC64 port of LDC. The partial support here is a bit
annoying but I did not find the time to implement the missing
functions myself.

It is native in the sense that it is a supported type by gcc and xlc.


Doesn't SSE2 effectively operate on double doubles too with instructions
like addpd (and others *pd)?


I'm everything but an assembly guru (so please correct me if I'm wrong), 
but if my understanding is right, SSE2 only operates element-wise (at 
least for the operations you are mentioning).

For instance, if you operate on two double2 vectors (in pseudo-code):
  c[] = a[] # b[]
where # is a supported binary operation, then the value of the first 
element of c only depends on the first elements of a and b.


The idea of double-double is that you operate on two doubles in such a 
way that if you concatenate the mantissas of both, then you 
effectively obtain the correct mathematical semantics of a quadruple 
precision floating point number, with a higher number of significant 
digits (~31 vs ~16 for double, in base 10).


I am not 100% sure yet, but I think that the idea is to simulate a 
floating point number with a 106 bit mantissa and a 12 bit exponent as

  x = s * ( m1 + m2 * 2^(-53) ) * 2^(e-b)
= s * m1 * 2^(e-b) + s * m2 * 2^(e-b-53)
where s is the sign bit (the same for both doubles), m1 and m2 the 
mantissas (including the implied 1 for normalized numbers), e the base-2 
exponent, b the common bias and 53 an extra bias for the low-order bits 
(I'm ignoring the denormalized numbers and the special values). The 
mantissa m1 of the first double gives the first 53 significant bits, and 
this of the second (m2) the extra 53 bits.


The addition is quite straightforward, but it gets tricky when 
implementing the other operations. The articles I mentioned in my 
previous post describe these operations for quad-doubles, 
achieving a ~62 digit precision (implemented in the QD library, but 
there is also a CUDA implementation). It is completely overkill for most 
applications, but it can be useful for studying the convergence of 
numerical algorithms, and double-doubles can provide the extra precision 
needed in some simulations (or to compare the results with double 
precision).


It is also a comparatively faster alternative to arbitrary-precision 
floating-point libraries like GMP/MPFR, since it does not need to 
emulate every single digit, but instead takes advantage of the native 
double precision instructions. The downside is that you cannot get more 
significant bits than n*53, which is not suitable for computing the 
decimals of pi for instance.
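
As a taste of the technique, here is a hedged sketch (mine, distilled 
from the standard TwoSum building block rather than from the QD papers 
verbatim) of how the exact sum of two doubles is split into a high and 
a low word:

struct DoubleDouble
{
    double hi; // leading bits of the value
    double lo; // the part that did not fit into hi
}

DoubleDouble twoSum(double a, double b)
{
    double s   = a + b;                     // rounded sum
    double bb  = s - a;
    double err = (a - (s - bb)) + (b - bb); // exact rounding error
    return DoubleDouble(s, err);
}

void main()
{
    auto r = twoSum(1.0, 1e-20); // 1e-20 vanishes in a plain double sum
    assert(r.hi == 1.0);
    assert(r.lo == 1e-20);       // but is recovered in the low word
    // (assumes strict double evaluation; x87 extended intermediates
    // can change the low word)
}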


To give you more details, I will need to study these papers more 
thoroughly. I am actually considering bringing double-double and 
quad-double software support to D, either by making a binding to QD, 
porting it or starting from scratch based on the papers. I don't know if 
it will succeed but it will be an interesting exercise anyway. I don't 
have a lot of time right now but I will try to start working on it in a 
few weeks. I'd really like to be able to use it with D. Having to 
rewrite an algorithm in C++ where I could only change one template 
argument in the main() can be quite painful :-)


typeid of an object whose static type is an interface returns the interface

2014-06-27 Thread Mark Isaacson via Digitalmars-d

If I have a variable whose static type is an interface and I call
typeid on it, I get the interface back, not the dynamic type.
This seems like confusing behavior. Is this the intended result?

I recognize that one needs some amount of state to perform the
dynamic type lookup, and so it is on that thought that a reason
for this might be based.

My workaround to this issue is shown in the code below, namely,
casting to Object before running typeid produces the expected
result. You can repro the issue by removing that cast.

import std.stdio;
import std.conv;

interface Base {

}

class Derived : Base {

}

void main() {
   Base b = new Derived();
   //1) As is, this prints "Derived"
   //2) Without the cast, this prints "Base" -- this is what is unexpected
   //3) If Base is changed to a class instead of an interface, this
   //   prints "Derived" regardless of whether or not the cast is in place
   writeln(text(typeid(cast(Object) b)));
}


Re: Module level variable shadowing

2014-06-27 Thread Meta via Digitalmars-d

On Friday, 27 June 2014 at 20:40:24 UTC, Walter Bright wrote:

On 6/27/2014 1:38 PM, Tofu Ninja wrote:

On Friday, 27 June 2014 at 08:24:16 UTC, dennis luehring wrote:
what i don't understand - why on earth should someone want to 
shadow a(or

better any) variable at all?


It can be useful if you are using mixins where you don't know 
what is going to

be in the destination scope.


Is true. People who do metaprogramming with C macros have all 
kinds of problems with the lack of scoping of temp variable 
names within the macros.


But keep in mind that we can also name mixins. Or is that only 
possible when you're mixing in a template/mixin template?


Re: Pair literal for D language

2014-06-27 Thread Mason McGill via Digitalmars-d
I like DIP54 and I think the work on fixing tuples is awesome, 
but I have 1 nit-picky question: why is it called 
TemplateArgumentList when it's not always used as template 
arguments?


  void func(string, string) { }

  TypeTuple!(string, string) var;
  var[0] = "I'm nobody's ";
  var[1] = "template argument!";
  func(var);

Why not a name that emphasizes the entity's semantics, like 
StaticList/ExpandingList/StaticTuple/ExpandingTuple?


Re: Few recent dmd pull requests

2014-06-27 Thread Jonathan M Davis via Digitalmars-d
On Thursday, June 26, 2014 17:45:23 Meta via Digitalmars-d wrote:
 On Thursday, 26 June 2014 at 17:26:02 UTC, bearophile wrote:
  Meta:
  There has been discussion before about doing away with string
  lambdas. Maybe this is a good time to do that.
 
  If they get deprecated I will have to manually fix a _ton_ of
  code :-)
 
  Bye,
  bearophile

 I guess instead of deprecate, I really mean just phase
 out. Undocument these templates and discourage their use, but
 don't actually deprecate them.

The major problem that still needs to be fixed with non-string lambdas is the
ability to compare them. Right now, as I understand it, the same non-string
lambda doesn't even result in the same template instantiation. String lambdas
don't have that problem.

- Jonathan M Davis



Re: Few recent dmd pull requests

2014-06-27 Thread H. S. Teoh via Digitalmars-d
On Fri, Jun 27, 2014 at 03:24:36PM -0700, Jonathan M Davis via Digitalmars-d 
wrote:
 On Thursday, June 26, 2014 17:45:23 Meta via Digitalmars-d wrote:
  On Thursday, 26 June 2014 at 17:26:02 UTC, bearophile wrote:
   Meta:
   There has been discussion before about doing away with string
   lambdas. Maybe this is a good time to do that.
  
   If they get deprecated I will have to manually fix a _ton_ of
   code :-)
  
   Bye,
   bearophile
 
  I guess instead of deprecate, I guess I really mean just phase
  out. Undocument these templates and discourage their use, but
  don't actually deprecate them.
 
 The major problem that still needs to be fixed with non-string lambdas
 is the ability to compare them. Right now, as I understand it, the
 same non-string lambda doesn't even result in the same template
 instantiation. String lambdas don't have that problem.
[...]

String lambda comparison is moot: "a<b" and "a < b" do not compare
equal. But at least, calling find!"a<b" multiple times will reuse the
same instantiation, whereas using lambdas will not. So at the very least
we need to fix lambda comparison so that identical lambdas will compare
equal.
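
A hedged illustration of the difference (example mine): the same 
string lambda always names the same symbol, which is exactly what 
lambda comparison would need to achieve for function literals.

import std.functional : binaryFun;

alias Less1 = binaryFun!"a < b";
alias Less2 = binaryFun!"a < b";
static assert(__traits(isSame, Less1, Less2)); // one shared instantiation

// Writing (a, b) => a < b twice, by contrast, historically produced
// two distinct symbols and thus two template instantiations.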

Andrei talked about various schemes of lambda comparison before, and I
think the consensus was that some sort of hash function on the lambda's
AST would be most practical, and easiest to implement. I don't know if
any further progress has been made since then, though.


T

-- 
Life is unfair. Ask too much from it, and it may decide you don't deserve what 
you have now either.


Re: Few recent dmd pull requests

2014-06-27 Thread Meta via Digitalmars-d
On Friday, 27 June 2014 at 22:31:57 UTC, H. S. Teoh via 
Digitalmars-d wrote:
I don't know if any further progress has been made since then, 
though.


I've yet to see a pull request for it, so I'd assume that there 
hasn't.


Re: Pair literal for D language

2014-06-27 Thread Tofu Ninja via Digitalmars-d

On Friday, 27 June 2014 at 22:01:21 UTC, Mason McGill wrote:

StaticList/ExpandingList/StaticTuple/ExpandingTuple?


I think StaticList is the clearest, really makes it obvious what 
it is.


Re: Module level variable shadowing

2014-06-27 Thread dennis luehring via Digitalmars-d

On 27.06.2014 22:38, Tofu Ninja wrote:

On Friday, 27 June 2014 at 08:24:16 UTC, dennis luehring wrote:

what i don't understand - why on earth should someone want to
shadow a(or better any) variable at all?


It can be useful if you are using mixins where you don't know
what is going to be in the destination scope.



that it can be useful in an even harder-to-understand situation makes it no 
better


Re: Module level variable shadowing

2014-06-27 Thread dennis luehring via Digitalmars-d

On 27.06.2014 20:09, Kapps wrote:

On Friday, 27 June 2014 at 08:24:16 UTC, dennis luehring wrote:

On 27.06.2014 10:20, dennis luehring wrote:

I think we hit the sweet spot at restricting shadowing detection
to local scopes.


sweet does not mean - use a better name or .x to avoid manually
hard-to-detect problems - it's like disabled shadow detection in
local scopes

what i don't understand - why on earth should someone want to
shadow a(or better any) variable at all?


struct Foo {
   int a;
   this(int a) {
   this.a = a;
   }
}



I forgot that case - but I don't like how it's currently handled. Maybe 
there's no better way - it's just not perfect :)


Re: Pair literal for D language

2014-06-27 Thread deadalnix via Digitalmars-d

On Saturday, 28 June 2014 at 03:01:12 UTC, Tofu Ninja wrote:

On Friday, 27 June 2014 at 22:01:21 UTC, Mason McGill wrote:

StaticList/ExpandingList/StaticTuple/ExpandingTuple?


I think StaticList is the clearest, really makes it obvious 
what it is.


Static already means everything, please no.


Re: Module level variable shadowing

2014-06-27 Thread H. S. Teoh via Digitalmars-d
On Sat, Jun 28, 2014 at 06:37:08AM +0200, dennis luehring via Digitalmars-d 
wrote:
 On 27.06.2014 20:09, Kapps wrote:
[...]
 struct Foo {
int a;
this(int a) {
this.a = a;
}
 }
 
 
 forgot that case - but i don't like how its currently handled, maybe
 no better way - its just not perfect :)

Actually, this particular use case is very bad. It's just inviting
typos, for example, if you mistyped "int a" as "int s", then you get:

struct Foo {
int a;
this(int s) {
this.a = a; // oops, now it means this.a = this.a
}
}

I used to like this shadowing trick, until one day I got bit by this
typo. From then on, I acquired a distaste for this kind of shadowing.
Not to mention, typos are only the beginning of troubles. If you copy a
few lines from the ctor into another method (e.g., to partially reset
the object state), then you end up with a similar unexpected rebinding
to this.a, etc..

Similar problems exist in nested functions:

auto myFunc(A...)(A args) {
int x;
int helperFunc(B...)(B args) {
int x = 1;
return x + args.length;
}
}

Accidentally mistype "B args" or "int x = 1", and again you get a silent
bug. This kind of shadowing is just a minefield of silent bugs waiting
to happen.

No thanks!


T

-- 
Designer clothes: how to cover less by paying more.


Re: Pair literal for D language

2014-06-27 Thread H. S. Teoh via Digitalmars-d
On Sat, Jun 28, 2014 at 05:00:09AM +, deadalnix via Digitalmars-d wrote:
 On Saturday, 28 June 2014 at 03:01:12 UTC, Tofu Ninja wrote:
 On Friday, 27 June 2014 at 22:01:21 UTC, Mason McGill wrote:
 StaticList/ExpandingList/StaticTuple/ExpandingTuple?
 
 I think StaticList is the clearest, really makes it obvious what it
 is.
 
 Static already means everything, please no.

Yeah, "static" already means way too many things in D. Let's not
overload it any more than it already is.

What about CTList? (CT for Compile-Time)


T

-- 
For every argument for something, there is always an equal and opposite 
argument against it. Debates don't give answers, only wounded or inflated egos.


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Walter Bright via Digitalmars-d

On 6/27/2014 4:10 AM, John Colvin wrote:

*The number of algorithms that are both numerically stable/correct and benefit
significantly from >64 bit doubles is very small.


To be blunt, baloney. I ran into these problems ALL THE TIME when doing 
professional numerical work.




Re: std.math performance (SSE vs. real)

2014-06-27 Thread Walter Bright via Digitalmars-d

On 6/27/2014 3:50 AM, Manu via Digitalmars-d wrote:

Totally agree.
Maintaining commitment to deprecated hardware which could be removed
from the silicon at any time is a bit of a problem looking forwards.
Regardless of the decision about whether overloads are created, at
very least, I'd suggest x64 should define real as double, since the
x87 is deprecated, and x64 ABI uses the SSE unit. It makes no sense at
all to use real under any general circumstances in x64 builds.

And aside from that, if you *think* you need real for precision, the
truth is, you probably have bigger problems.
Double already has massive precision. I find it's extremely rare to
have precision problems even with float under most normal usage
circumstances, assuming you are conscious of the relative magnitudes
of your terms.


That's a common perception of people who do not use the floating point unit for 
numerical work, and whose main concern is speed instead of accuracy.


I've done numerical floating point work. Two common cases where such precision 
matters:


1. numerical integration
2. inverting matrices

It's amazing how quickly precision gets overwhelmed and you get garbage answers. 
For example, when inverting a matrix with doubles, the results are garbage for 
larger than 14*14 matrices or so. There are techniques for dealing with this, 
but they are complex and difficult to implement.


Increasing the precision is the most straightforward way to deal with it.
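
A small demonstration (example mine, not Walter's): terms below a 
type's precision are silently lost in accumulation, and extra bits 
merely postpone the point where that happens.

import std.stdio;

void main()
{
    float  f = 1.0f;
    double d = 1.0;
    foreach (i; 0 .. 10_000_000)
    {
        f += 1.0e-8f; // below float's ulp at 1.0: every addend is lost
        d += 1.0e-8;  // double still has room to track it
    }
    writefln("float:  %.8f", f); // stays at ~1.00000000
    writefln("double: %.8f", d); // ~1.10000000
}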

Note that the 80 bit precision comes from W.F. Kahan, and he's no fool when 
dealing with these issues.


Another boring Boeing anecdote: calculators have around 10 digits of precision. 
A colleague of mine was doing a multi-step calculation, and rounded each step to 
2 decimal points. I told him he needed to keep the full 10 digits. He ridiculed 
me - but his final answer was off by a factor of 2. He could not understand why, 
and I'd explain, but he could never get how his 2 places past the decimal point 
did not work.


Do you think engineers like that will ever understand the problems with double 
precision, or have the remotest idea how to deal with them beyond increasing the 
precision? I don't.



 I find it's extremely rare to have precision problems even with float under 
most normal usage

 circumstances,

Then you aren't doing numerical work, because it happens right away.


Re: std.math performance (SSE vs. real)

2014-06-27 Thread Walter Bright via Digitalmars-d

On 6/27/2014 11:47 AM, Kagamin wrote:

I think, make real==double on x86-64, like on other architectures, because
double is the way to go.


No.

Consider also that on non-Windows platforms, such a decision would shut D out 
from accessing C code written using long doubles.


BTW, there's a reason Fortran is still king for numerical work - that's because 
C compiler devs typically do not understand floating point math and provide 
crappy imprecise math functions. I had an argument with a physics computation 
prof a year back who was gobsmacked when I told him the FreeBSD 80 bit math 
functions were only accurate to 64 bits. He told me he didn't believe me, that C 
wouldn't make such mistakes. I suggested he test it and see for himself :-)


They can and do. The history of C, including the C Standard, shows a lack of 
knowledge of how to do numerical math. For example, it was years and years 
before the Standard mentioned what the math functions should do with infinity 
arguments.


Things have gotten better in recent years, but I'd always intended that D out of 
the gate have proper support for fp, including fully accurate math functions. 
The reason D re-implements the math functions in Phobos rather than deferring to 
the C ones is the unreliability of the C ones.


Re: Precompiled binaries of DWT for windows?

2014-06-27 Thread pgtkda via Digitalmars-d-dwt

On Thursday, 26 June 2014 at 20:57:44 UTC, Jacob Carlborg wrote:

On 2014-06-26 10:19, pgtkda wrote:

Are there any precompiled binaries for windows?


Unfortunately no, there are no pre-compiled binaries. But it's 
very easy to build yourself, just follow the build instructions 
[1].


[1] https://github.com/d-widget-toolkit/dwt#building-1


Not so easy for me. Where do I have to type this?

$ git clone --recursive git://github.com/d-widget-toolkit/dwt.git


Re: Precompiled binaries of DWT for windows?

2014-06-27 Thread pgtkda via Digitalmars-d-dwt

On Friday, 27 June 2014 at 07:59:51 UTC, Jacob Carlborg wrote:

On 2014-06-27 09:51, pgtkda wrote:


Not so easy for me. Where do I have to type this?

$ git clone --recursive 
git://github.com/d-widget-toolkit/dwt.git


In a terminal/cmd (btw, you don't type the "$", that's just an 
indication it should be typed in a terminal). Of course, this 
requires you to have git installed, which I would recommend 
if you're doing any development with D.


Alternatively you can download a zip of the sources:

1. Download the DWT sources [1]
2. Extract the zip file
3. Download the DWT Win32 sources [2]
4. Extract the Win32 zip file into 
dwt\org.eclipse.swt.win32.win32.x86, where dwt is the path 
you extracted the DWT source code to


[1] https://github.com/d-widget-toolkit/dwt/archive/master.zip
[2] 
https://github.com/d-widget-toolkit/org.eclipse.swt.win32.win32.x86/archive/master.zip


Okay, thanks for your detailed answer. What should I do next if I 
extracted the Win32 zip file?




Re: Precompiled binaries of DWT for windows?

2014-06-27 Thread Jacob Carlborg via Digitalmars-d-dwt

On 2014-06-27 10:10, pgtkda wrote:


Okay, thanks for your detailed answer. What should I do next if I
extracted the Win32 zip file?


Follow the instructions here [1]. The steps I described above replaces 
the second step in the linked build instructions.


[1] https://github.com/d-widget-toolkit/dwt#building-1

--
/Jacob Carlborg


~ ?

2014-06-27 Thread pgtkda via Digitalmars-d-learn

What does this symbol mean in relation to D?

~


Enum type deduction inside templates is not working

2014-06-27 Thread Uranuz via Digitalmars-d-learn
The compiler can't deduce the type for the template struct Pair when 
using it with an enum argument. Here is an example:


import std.stdio;

enum Category { first, second, third };

struct Pair(F, S)
{
F first;
S second;

this(F f, S s)
{
first = f;
second = s;
}
}


void main()
{
auto p = Pair(Category.first, "first"); // It fails

writeln(p);
}

Is it not working for some reason, am I doing something wrong, or 
is it just a lack of implementation? How could I make this work 
without explicitly specifying the types?
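
The usual workaround (a hedged sketch, reusing the Pair and Category 
definitions from the snippet above): IFTI applies to functions but not 
to struct constructors, so wrap construction in a factory function.

Pair!(F, S) makePair(F, S)(F f, S s)
{
    return Pair!(F, S)(f, s);
}

void main()
{
    auto p = makePair(Category.first, "first"); // Pair!(Category, string)
    writeln(p);
}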


Re: Enum type deduction inside templates is not working

2014-06-27 Thread pgtkda via Digitalmars-d-learn

On Friday, 27 June 2014 at 06:12:57 UTC, pgtkda wrote:
How could I make this

work without explicitly specifying the types?


sorry, i should read better




Re: Enum type deduction inside templates is not working

2014-06-27 Thread pgtkda via Digitalmars-d-learn

On Friday, 27 June 2014 at 06:04:20 UTC, Uranuz wrote:
The compiler can't deduce the type for the template struct Pair when 
using it with an enum argument. Here is an example:


import std.stdio;

enum Category { first, second, third };

struct Pair(F, S)
{
F first;
S second;

this(F f, S s)
{
first = f;
second = s;
}
}


void main()
{
auto p = Pair(Category.first, "first"); // It fails

writeln(p);
}

Is it not working for some reason, am I doing something wrong, 
or is it just a lack of implementation? How could I make this 
work without explicitly specifying the types?


Is this a solution for your problem?


import std.stdio;

enum Category { first, second, third }

struct Pair
{
    Category cat;
    string second;
    this(Category cat, string second)
    {
        this.cat = cat;
        this.second = second;
    }
}

void main()
{
    auto p = Pair(Category.first, "first");
    writeln(p);
}



Re: Enum type deduction inside templates is not working

2014-06-27 Thread Uranuz via Digitalmars-d-learn

On Friday, 27 June 2014 at 06:14:48 UTC, pgtkda wrote:

On Friday, 27 June 2014 at 06:12:57 UTC, pgtkda wrote:
How could I make this

work without explicitly specifying the types?


sorry, i should read better


OK. Maybe it was already discussed somewhere, but I am not good at 
searching in English. Are there any directions on this? How could 
I work around it? Should I mail a proposal or file a bug report for 
it?


Re: ~ ?

2014-06-27 Thread Ali Çehreli via Digitalmars-d-learn

On 06/26/2014 10:58 PM, pgtkda wrote:

What does this symbol mean in relation to D?

~


It can be used in two ways:

1) When used as a unary operator, it means bitwise complement:

assert(~0xaa55aa55 == 0x55aa55aa);

2) When used as a binary operator, it means concatenation:

assert("hello" ~ " world" == "hello world");

auto arr = [ 1, 2 ];
assert(arr ~ 3 == [ 1, 2, 3 ]);

When used with assignment, it means appending:

auto arr = [ 1, 2 ];
arr ~= 3;

assert(arr == [ 1, 2, 3 ]);

It can also be used in the special function name ~this(), which is the 
destructor of a struct or a class. (Related functions: 'static ~this()' 
and 'shared static ~this()')
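
A quick example of that destructor spelling (mine, for illustration):

import std.stdio;

struct Resource
{
    ~this() { writeln("Resource released"); }
}

void main()
{
    Resource r;
} // prints "Resource released" when r goes out of scope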


Ali

[1] http://ddili.org/ders/d.en/bit_operations.html

[2] http://ddili.org/ders/d.en/arrays.html

[3] http://ddili.org/ders/d.en/special_functions.html



Re: Enum type deduction inside templates is not working

2014-06-27 Thread pgtkda via Digitalmars-d-learn

On Friday, 27 June 2014 at 06:21:11 UTC, Uranuz wrote:

On Friday, 27 June 2014 at 06:14:48 UTC, pgtkda wrote:

On Friday, 27 June 2014 at 06:12:57 UTC, pgtkda wrote:
How could I make this

work without explicitly specifying the types?


sorry, i should read better


OK. Maybe it was already discussed somewhere, but I am not good 
at searching in English. Are there any directions on this? How 
could I work around it? Should I mail a proposal or file a bug 
report for it?


I think D is a typesafe language, therefore you can't use 
variables with no type declaration.


One thing you can search for is templates, but even there you 
have to define a type:


import std.stdio;

enum Category : string { first = "first" }

template Pair(T)
{
T t;
T cat;
}


void main()
{
alias Pair!(string) a;
a.cat = Category.first;
a.t = "first";

writeln(a.cat, " . ", a.t);
}


Re: ~ ?

2014-06-27 Thread pgtkda via Digitalmars-d-learn

On Friday, 27 June 2014 at 06:33:07 UTC, Ali Çehreli wrote:

On 06/26/2014 10:58 PM, pgtkda wrote:

What does this symbol mean in relation to D?

~


It can be used in two ways:

1) When used as a unary operator, it means bitwise complement:

assert(~0xaa55aa55 == 0x55aa55aa);

2) When used as a binary operator, it means concatenation:

assert("hello" ~ " world" == "hello world");

auto arr = [ 1, 2 ];
assert(arr ~ 3 == [ 1, 2, 3 ]);

When used with assignment, it means appending:

auto arr = [ 1, 2 ];
arr ~= 3;

assert(arr == [ 1, 2, 3 ]);

It can also be used in the special function name ~this(), which 
is the destructor of a struct or a class. (Related functions: 
'static ~this()' and 'shared static ~this()')


Ali

[1] http://ddili.org/ders/d.en/bit_operations.html

[2] http://ddili.org/ders/d.en/arrays.html

[3] http://ddili.org/ders/d.en/special_functions.html


Thanks :)


GC.calloc(), then what?

2014-06-27 Thread Ali Çehreli via Digitalmars-d-learn
1) After allocating memory by GC.calloc() to place objects on it, what 
else should one do? In what situations does one need to call addRoot() 
or addRange()?


2) Does the answer to the previous question differ for struct objects 
versus class objects?


3) Is there a difference between core.stdc.stdlib.calloc() and 
GC.calloc() in that regard? Which one to use in what situation?


4) Are the random bit patterns in a malloc()'ed memory always a concern 
for false pointers? Does that become a concern after calling addRoot() 
or addRange()? If so, why would anyone ever malloc() instead of always 
calloc()'ing?


Ali


Re: Enum type deduction inside templates is not working

2014-06-27 Thread Uranuz via Digitalmars-d-learn
I think, D is a typesafe language, therefore you can't use 
variables with no type declaration.


One thing you can search for, are templates but even there you 
have to define a type:


import std.stdio;

enum Category : string { first = "first" }

template Pair(T)
{
T t;
T cat;
}


void main()
{
alias Pair!(string) a;
a.cat = Category.first;
a.t = "first";

writeln(a.cat, " . ", a.t);
}


OK. I know that D is a typesafe language, but I'm not going to do 
implicit type casts there, because the type of Category.first 
is Category itself, not string or something. In this example 
`a.cat = Category.first;` tries to make an implicit cast (I don't 
remember whether it is allowed or not).


Re: GC.calloc(), then what?

2014-06-27 Thread safety0ff via Digitalmars-d-learn

On Friday, 27 June 2014 at 07:03:28 UTC, Ali Çehreli wrote:
1) After allocating memory by GC.calloc() to place objects on 
it, what else should one do?


Use std.conv.emplace.

In what situations does one need to call addRoot() or 
addRange()?


addRoot creates an internal reference within the GC to the 
memory pointed to by the argument (void* p).
This pins the memory so that it won't be collected by the GC. 
E.g. you're going to pass a string to an extern C function, and 
the function will store a pointer to the string within its own 
data structures. Since the GC won't have access to the data 
structures, you must addRoot it to avoid creating a dangling 
pointer in the C data structure.


Add range is usually for cases when you use 
stdc.stdlib.malloc/calloc and place pointers to GC managed memory 
within that memory. This allows the GC to scan that memory for 
pointers during collection, otherwise it may reclaim memory which 
is pointed to by malloc'd memory.
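
A hedged sketch of that case (example mine): C-heap memory holding a 
pointer into the GC heap.

import core.memory : GC;
import core.stdc.stdlib : malloc, free;

void example()
{
    // One pointer-sized slot outside the GC heap.
    void** slot = cast(void**) malloc((void*).sizeof);
    GC.addRange(slot, (void*).sizeof); // let the GC scan this word

    *slot = GC.malloc(64); // the block is now reachable only via slot

    // ... use the memory; the GC keeps it alive while the range exists.

    GC.removeRange(slot);
    free(slot);
}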


2) Does the answer to the previous question differ for struct 
objects versus class objects?


No.

3) Is there a difference between core.stdc.stdlib.calloc() and 
GC.calloc() in that regard? Which one to use in what situation?


One is GC managed, the other is not. calloc simply means the 
memory is pre-zeroed; it has nothing to do with "C allocation" / 
allocation in the C language.


4) Are the random bit patterns in a malloc()'ed memory always a 
concern for false pointers? Does that become a concern after 
calling addRoot() or addRange()?


If by malloc you're talking about stdc.stdlib.malloc then:
It only becomes a concern after you call addRange, and the false 
pointers potential is only present within the range you gave to 
addRange.
So if you over-allocate using malloc and give the entire memory 
range to addRange, then any false pointers in the uninitialized 
portion become a concern.


If you're talking about GC.malloc():
Currently the GC zeros the memory unless you allocate NO_SCAN 
memory, so it only differs in the NO_SCAN case.


If so, why would anyone ever malloc() instead of always 
calloc()'ing?


To save on redundant zero'ing.
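
For instance (hedged example mine), a pointer-free buffer can skip 
both scanning and pre-zeroing:

import core.memory : GC;

void example()
{
    // NO_SCAN: the block holds no pointers, so the GC need not scan it.
    auto buf = cast(ubyte*) GC.malloc(4096, GC.BlkAttr.NO_SCAN);
    buf[0 .. 4096] = 0; // zero it ourselves only if zeros are needed
}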


Re: Enum type deduction inside templates is not working

2014-06-27 Thread Uranuz via Digitalmars-d-learn
It seems I found the answer myself. As far as I understand, type 
inference works only for template functions, not for struct 
or class templates. This is why this isn't working; the enum is not 
responsible for it.


I don't know why, after using D for so long, I did not remember this 
fact.


Re: GC.calloc(), then what?

2014-06-27 Thread safety0ff via Digitalmars-d-learn
I realize that my answer isn't completely clear in some cases, if 
you still have questions, ask away.


Re: Enum type deduction inside templates is not working

2014-06-27 Thread Uranuz via Digitalmars-d-learn

A proposal exists for this topic:
http://wiki.dlang.org/DIP40


Re: GC.calloc(), then what?

2014-06-27 Thread Ali Çehreli via Digitalmars-d-learn

On 06/27/2014 12:53 AM, safety0ff wrote:

I realize that my answer isn't completely clear in some cases, if you
still have questions, ask away.


Done! That's why we are here anyway. :p

Ali



Re: GC.calloc(), then what?

2014-06-27 Thread Ali Çehreli via Digitalmars-d-learn

Thank you for your responses. I am partly enlightened. :p

On 06/27/2014 12:34 AM, safety0ff wrote:

 On Friday, 27 June 2014 at 07:03:28 UTC, Ali Çehreli wrote:
 1) After allocating memory by GC.calloc() to place objects on it, what
 else should one do?

 Use std.conv.emplace.

That much I know. :) I have actually finished the first draft of 
translating my memory management chapter (the last one in the book!) and 
am trying to make sure that the information is correct.


 In what situations does one need to call addRoot() or addRange()?

 Add root creates an internal reference within the GC to the memory
 pointed by the argument (void* p.)
 This pins the memory so that it won't be collected by the GC. E.g.
 you're going to pass a string to an extern C function, and the function
 will store a pointer to the string within its own data structures. Since
 the GC won't have access to the data structures, you must addRoot it to
 avoid creating a dangling pointer in the C data structure.

Additionally, according to the documentation, any other GC blocks 
reachable from it will be considered live. So, addRoot creates a true 
root from which the GC starts its scanning.


 Add range is usually for cases when you use stdc.stdlib.malloc/calloc
 and place pointers to GC managed memory within that memory. This allows
 the GC to scan that memory for pointers during collection, otherwise it
 may reclaim memory which is pointed to my malloc'd memory.

One part that I don't understand in the documentation is "if p points 
into a GC-managed memory block, addRange does not mark this block as live".


  http://dlang.org/phobos/core_memory.html#.GC.addRange

Does that mean that if I have objects in my addRange'd memory that in 
turn have references to objects in the GC-managed memory, my references 
in my memory may be stale?


If so, does that mean that if I manage objects in my memory, all their 
members should be managed by me as well?


This seems to bring two types of GC-managed memory:

1) addRoot'ed memory that gets scanned deep (references are followed)

2) addRange'd memory that gets scanned shallow (references are not followed)

See, that's confusing: What does that mean? I still hold the memory 
block anyway; what does the GC achieve by scanning my memory if it's not 
going to follow references anyway?


 2) Does the answer to the previous question differ for struct objects
 versus class objects?

 No.

 3) Is there a difference between core.stdc.stdlib.calloc() and
 GC.calloc() in that regard? Which one to use in what situation?

 One is GC managed, the other is not. calloc simply means the memory is
 pre-zero'd, it has nothing to do with C allocation / allocation in
 the C language

I know even that much. ;) I find people's malloc+memset code amusing.

 4) Are the random bit patterns in a malloc()'ed memory always a
 concern for false pointers? Does that become a concern after calling
 addRoot() or addRange()?

 If by malloc you're talking about stdc.stdlib.malloc then:
 It only becomes a concern after you call addRange,

But addRange doesn't seem to make sense for stdlib.malloc'ed memory, 
right? The reason is, that memory is not managed by the GC so there is 
no danger of losing that memory due to a collection anyway. It will go 
away only when I call stdlib.free.


 and the false
 pointers potential is only present within the range you gave to addRange.
 So if you over-allocate using malloc and give the entire memory range to
 addRange, then any false pointers in the un-intialized portion become a
 concern.

Repeating myself, that makes sense but I don't see when I would need 
addRange on a stdlib.malloc'ed memory.


 If you're talking about GC.malloc():
 Currently the GC zeros the memory unless you allocate NO_SCAN memory, so
 it only differs in the NO_SCAN case.

So, the GC's default behavior is to scan the memory, necessitating 
clearing the contents? That seems to make GC.malloc() behave the same as 
GC.calloc() by default, doesn't it?


So, is this guideline right?

  GC.malloc() makes sense only with NO_SCAN.

 If so, why would anyone ever malloc() instead of always calloc()'ing?

 To save on redundant zero'ing.

And again, redundant zero'ing is saved only when used with NO_SCAN.

I think I finally understand the main difference between stdlib.malloc 
and GC.malloc: The latter gets collected by the GC.


Another question: Are GC.malloc'ed and GC.calloc'ed memory scanned deeply?

Ali



Re: GC.calloc(), then what?

2014-06-27 Thread safety0ff via Digitalmars-d-learn

On Friday, 27 June 2014 at 08:17:07 UTC, Ali Çehreli wrote:

Thank you for your responses. I am partly enlightened. :p


I know you're a knowledgeable person in the D community, I may 
have stated many things you already knew, but I tried to answer 
the questions as-is.




On 06/27/2014 12:34 AM, safety0ff wrote:

 Add range is usually for cases when you use stdc.stdlib.malloc/calloc
 and place pointers to GC managed memory within that memory. This allows
 the GC to scan that memory for pointers during collection, otherwise it
 may reclaim memory which is pointed to by malloc'd memory.

One part that I don't understand in the documentation is "if p 
points into a GC-managed memory block, addRange does not mark 
this block as live".


[SNIP]

See, that's confusing: What does that mean? I still hold the 
memory block anyway; what does the GC achieve by scanning my 
memory if it's not going to follow references anyway?


The GC _will_ follow references (i.e. scan deeply); that's the 
whole point of addRange.

What that documentation is saying is that:

If you pass a range R through addRange, and R lies in the GC 
heap, then once there are no pointers (roots) to R, the GC will 
collect it anyway, regardless of the fact that you called addRange on it.


In other words, prefer using addRoot for GC memory and addRange 
for non-GC memory.
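
A hedged sketch of the addRoot side (c_store_pointer is a hypothetical 
extern(C) function, not a real API):

import core.memory : GC;

extern (C) void c_store_pointer(void* p); // hypothetical C callee

void handOff()
{
    auto buf = new ubyte[](128);
    GC.addRoot(buf.ptr);      // keep the block live while C holds it
    c_store_pointer(buf.ptr);
    // ... later, once the C side is done with it:
    GC.removeRoot(buf.ptr);
}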




 4) Are the random bit patterns in a malloc()'ed memory always a
 concern for false pointers? Does that become a concern after
 calling addRoot() or addRange()?

 If by malloc you're talking about stdc.stdlib.malloc then:
 It only becomes a concern after you call addRange,

But addRange doesn't seem to make sense for stdlib.malloc'ed 
memory, right? The reason is, that memory is not managed by the 
GC so there is no danger of losing that memory due to a 
collection anyway. It will go away only when I call stdlib.free.


addRange almost exclusively makes sense with stdlib.malloc'ed 
memory.
As you've stated: If you pass it GC memory it does not mark the 
block as live.


I believe the answer above clears things up: the GC does scan the 
range, and scanning is always deep (i.e. when it finds pointers 
to unmarked GC memory, it marks them.)


Conversely, addRoot exclusively makes sense with GC memory.


 If you're talking about GC.malloc():
 Currently the GC zeros the memory unless you allocate NO_SCAN
memory, so
 it only differs in the NO_SCAN case.

So, the GC's default behavior is to scan the memory, 
necessitating clearing the contents? That seems to make 
GC.malloc() behave the same as GC.calloc() by default, doesn't 
it?



I don't believe it's necessary to clear it, it's just a measure 
against false pointers (AFAIK.)




So, is this guideline right?

  GC.malloc() makes sense only with NO_SCAN.



I wouldn't make a guideline like that; just say that if you want 
the memory to be guaranteed to be zeroed, use GC.calloc.


However, due to GC internals (for preventing false pointers), 
GC.malloc'd memory will often be zeroed anyway.



 If so, why would anyone ever malloc() instead of always
calloc()'ing?

 To save on redundant zero'ing.

And again, redundant zero'ing is saved only when used with 
NO_SCAN.


Yup.

I think I finally understand the main difference between 
stdlib.malloc and GC.malloc: The latter gets collected by the 
GC.


Yup.

Another question: Are GC.malloc'ed and GC.calloc'ed memory 
scanned deeply?


Yes, only NO_SCAN memory doesn't get scanned, everything else 
does.




Re: GC.calloc(), then what?

2014-06-27 Thread safety0ff via Digitalmars-d-learn

On Friday, 27 June 2014 at 08:17:07 UTC, Ali Çehreli wrote:


So, the GC's default behavior is to scan the memory, 
necessitating clearing the contents? That seems to make 
GC.malloc() behave the same as GC.calloc() by default, doesn't 
it?


Yes.
compare:
https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L543
to:
https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L419


Re: GC.calloc(), then what?

2014-06-27 Thread safety0ff via Digitalmars-d-learn

On Friday, 27 June 2014 at 09:20:53 UTC, safety0ff wrote:

Yes.
compare:
https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L543
to:
https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L419


Actually, I just realized that I was wrong in saying the memory 
will likely be cleared by malloc; it's only the overallocation that 
gets cleared.

