Re: Tkd - Cross platform GUI toolkit based on Tcl/Tk

2014-06-20 Thread Kagamin via Digitalmars-d-announce

On Thursday, 19 June 2014 at 19:24:15 UTC, SomeRiz wrote:

Visual Studio like editor for TkD :/


Hmm... visual designers can usually build pixel-oriented GUI, tk 
uses layouts, which work with code a little better.


Re: core.checkedint added to druntime

2014-06-20 Thread David Nadlinger via Digitalmars-d-announce

On Thursday, 19 June 2014 at 03:42:11 UTC, David Bregman wrote:
I think the mulu implementation is incorrect. There can be an 
overflow even when r = 0. For example consider the int version 
with x = y = 1 << 16.


I also noticed this; another easy counter-example would be 1 << 32 
for ulong multiplication.


Filed as: https://issues.dlang.org/show_bug.cgi?id=12958
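
(A minimal D sketch of the failure mode, not the actual druntime code: with 
x = y = 1 << 16 the 32-bit product wraps to exactly 0, yet the multiplication 
overflowed. A division-based check catches this case.)

bool mulOverflows(uint x, uint y)
{
    // The product wraps modulo 2^32 on overflow; dividing it back exposes that.
    immutable uint r = x * y;
    return x != 0 && r / x != y;
}

unittest
{
    assert(mulOverflows(1 << 16, 1 << 16));   // 2^32 wraps to 0, still an overflow
    assert(!mulOverflows(3, 5));
}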


Re: Lang.NEXT panel (dfix)

2014-06-20 Thread Stefan Koch via Digitalmars-d-announce

On Thursday, 19 June 2014 at 21:28:28 UTC, Brian Schott wrote:

On Thursday, 19 June 2014 at 20:37:48 UTC, Stefan Koch wrote:
hmm well all string-mixins live at compile-time. So one can 
print them out at runtime. Dump the source and put it into the 
AST. Same for the results of static if, and the like.


I imagine that trying to create an automated refactoring tool 
for D is a bit like parsing HTML with regex.


http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags


A hypothetical dfix-tool has a different scope compared to a 
compiler.
Every sufficiently complex transformation is very hard to do 
automatically.
My goal is just to make simple tasks simple. I hope a superficial 
understanding of D's AST is enough for that.




Re: Interview at Lang.NEXT

2014-06-20 Thread Bruno Medeiros via Digitalmars-d-announce

On 17/06/2014 07:21, Jacob Carlborg wrote:

On 16/06/14 16:00, Bruno Medeiros wrote:


I sometimes tried to convince dynamic language proponents - the ones
that write unittests at least - of the benefits of static typing, by
stating that static typing is really just compile time unit-tests! (it
is actually)


You can actually do compile time unit tests in D, though that is not the type
system. I.e. unit tests for CTFE functions that run at compile time.
Pretty cool actually :)



I know, pretty cool yeah. But specific to D, I was talking about static 
typing in general.
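
(The compile-time unit test idea Jacob mentions, as a minimal sketch; square() 
is just an illustrative function:)

int square(int x) { return x * x; }

// The static assert forces square() through CTFE when the module is
// compiled, so a regression breaks the build rather than the test run.
static assert(square(4) == 16);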


--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: Lang.NEXT panel (dfix)

2014-06-20 Thread Bruno Medeiros via Digitalmars-d-announce

On 17/06/2014 20:59, Dicebot wrote:

On Tuesday, 17 June 2014 at 19:48:42 UTC, Bruno Medeiros wrote:

On 17/06/2014 19:10, deadalnix wrote:

On Tuesday, 17 June 2014 at 15:45:55 UTC, Bruno Medeiros wrote:


Dunno about DScanner, but if it's being used in DCD, I'd guess it can
handle the whole language, or be fairly close to it.

Similarly, there is also DParser2 from MonoD and the DDT parser (for
the tool I'm working on)



HAHAHAHAHAHA ! (The author of these actual tools will tell you the
same).



I don't understand what point it is you're trying to make here...
Are you saying it's ludicrous that people have written complete
parsers for D?


Parsing D is relatively simple but making any reliable changes without
full (and mean _full_) semantic analysis is close to impossible because
of code generation and interleaving semantic stages.


A lot of simple changes could be made with little or no semantic 
analysis. I'm not talking about complex refactorings such as 
Extract/Inline Function, Introduce/Remove Parameter, Pull Method 
Up/Down, extract Class/Interface, etc.


Rather, simple fix changes that would be useful if the API or syntax of 
the language changes. That's why I asked for examples of dfix changes 
(even if for hypothetical language changes) - to see how easily they 
could be implemented or not.


--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: Lang.NEXT panel (dfix)

2014-06-20 Thread Dicebot via Digitalmars-d-announce

On Friday, 20 June 2014 at 13:04:23 UTC, Bruno Medeiros wrote:
Rather, simple fix changes that would be useful if the API or 
syntax of the language changes. That's why I asked for examples 
of dfix changes (even if for hypothetical language changes) - 
to see how easily they could be implemented or not.


Well I guess the most recent example is the `final` by default 
proposal - marking all existing functions explicitly as virtual.


The problem with dfix is that such a tool can't afford to be a 
best-effort implementation if it is to be used as justification for 
breaking changes. It needs to provide a guaranteed 0-cost 
transition or someone will inevitably be unhappy about the 
breakage anyway :(


Re: Tkd - Cross platform GUI toolkit based on Tcl/Tk

2014-06-20 Thread Nick Sabalausky via Digitalmars-d-announce

On 6/20/2014 4:37 AM, Kagamin wrote:

On Thursday, 19 June 2014 at 19:24:15 UTC, SomeRiz wrote:

Visual Studio like editor for TkD :/


Hmm... visual designers can usually build pixel-oriented GUI, tk uses
layouts, which work with code a little better.


While it's been a while since I've used visual GUI designers much, I seem 
to remember them (at least the better ones anyway) being perfectly 
capable of doing resizable layouts. Any limitations seemed to have more 
to do with the widgets and GUI libs themselves rather than any inherent 
drawback to GUI designers in general. I seem to recall doing some 
resizable layouts even as far back as VB3.


Re: hap.random: a new random number library for D

2014-06-20 Thread Nick Sabalausky via Digitalmars-d-announce

On 6/19/2014 5:27 PM, Joseph Rushton Wakeling wrote:


I realized that it ought to be possible to allow a more direct drop-in
replacement for std.random by adding static opCalls to the classes which
were previously structs.

Thoughts on this, in favour, against ... ?


I'm on the fence:

Pro: Upgrade paths and backwards compatibility are great, especially for 
Phobos.


Con: If any semantics are changed (default ref/value passing is the only 
one that comes to mind), then maybe it would mask potential upgrade 
issues. Breakage would force users to notice the change and (hopefully) 
deal with it appropriately.


I don't personally see it as a big deal either way, though.
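
(A minimal sketch of the static opCall idea Joseph describes; Gen is an 
illustrative name, not hap.random's actual API. Old struct-style call sites 
such as auto rng = Gen(42); keep compiling because Gen(42) now resolves to 
the static opCall.)

final class Gen
{
    uint state;

    // Emulates struct-style construction: Gen(seed) builds a class instance.
    static Gen opCall(uint seed)
    {
        auto g = new Gen;
        g.state = seed;
        return g;
    }
}

unittest
{
    auto rng = Gen(42);       // unchanged call site from the struct days
    assert(rng.state == 42);
}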



Re: Tkd - Cross platform GUI toolkit based on Tcl/Tk

2014-06-20 Thread Jacob Carlborg via Digitalmars-d-announce

On 2014-06-19 20:47, SomeRiz wrote:

Thanks Gary.

Very simple :)

But i have a question.

All DLL file = How can i embed main.d file?


Use DWT [1], no additional requirements besides the system libraries ;)

[1] https://github.com/d-widget-toolkit/dwt

--
/Jacob Carlborg


Re: DConf Day 1 Talk 6: Case Studies in Simplifying Code with Compile-Time Reflection by Atila Neves

2014-06-20 Thread Jacob Carlborg via Digitalmars-d-announce

On 2014-06-19 14:16, Joakim wrote:


Sorry, I just noticed that you were only talking about HD quality.  I
don't know where you're getting the 350 MB figure, as all the HD
recordings on archive.org are about 6-800 MB, but yeah, file sizes will
vary based on the type of HD resolution and encoding used.  I wouldn't
call any hour-long video encoded into 350 MB HD quality though, as
it's likely so compressed as to look muddy.


If I recall correctly, this talk, uploaded to youtube by Dicebot, was 
around 350 MB, HD quality.


--
/Jacob Carlborg


Re: DConf Day 1 Talk 6: Case Studies in Simplifying Code with Compile-Time Reflection by Atila Neves

2014-06-20 Thread Dicebot via Digitalmars-d-announce

On Friday, 20 June 2014 at 21:44:16 UTC, Jacob Carlborg wrote:

On 2014-06-19 14:16, Joakim wrote:

Sorry, I just noticed that you were only talking about HD 
quality.  I don't know where you're getting the 350 MB figure, as 
all the HD recordings on archive.org are about 6-800 MB, but yeah, 
file sizes will vary based on the type of HD resolution and 
encoding used.  I wouldn't call any hour-long video encoded into 
350 MB HD quality though, as it's likely so compressed as to look 
muddy.


If I recall correctly, this talk, uploaded to youtube by 
Dicebot, was around 350 MB, HD quality.


I always upload highest quality available on archive.org (634.3 
MB for this one), YouTube re-encoding must be pretty good :)


Re: DConf Day 1 Talk 6: Case Studies in Simplifying Code with Compile-Time Reflection by Atila Neves

2014-06-20 Thread Andrei Alexandrescu via Digitalmars-d-announce

On 6/19/14, 5:16 AM, Joakim wrote:

On Thursday, 19 June 2014 at 11:04:25 UTC, Jacob Carlborg wrote:

My connection is specified to 10 Mbps. But it depends on how large
the files are. Most of the files from DConf are under around 350MB in
HD quality. On the other hand, Andrei's talk from LangNext 2014 is
1.3 GB and 48 minutes long while the talk by Bjarne is 2.8 GB and 68
minutes long.


There are also 740 and 65.8 MB encodings of Andrei's talk that are
perfectly usable.  I should know, as I downloaded the latter.
 Same for Bjarne's talk, which I haven't downloaded.


Sorry, I just noticed that you were only talking about HD quality.  I
don't know where you're getting the 350 MB figure, as all the HD
recordings on archive.org are about 6-800 MB, but yeah, file sizes will
vary based on the type of HD resolution and encoding used.  I wouldn't
call any hour-long video encoded into 350 MB HD quality though, as
it's likely so compressed as to look muddy.


I use archive.org because it's the only one I found that accepts 
full-resolution videos. -- Andrei


ANTLR grammar for D?

2014-06-20 Thread Wesley Hamilton via Digitalmars-d
I've started making a D grammar for ANTLR4, but I didn't want to 
spend days testing and debugging it later if somebody already has 
one.


The best search results turn up posts that are 10 years old. Only 
one post has a link to a grammar file and the page seems to have 
been removed. I also assume it would be obsolete with changes to 
ANTLR and D.

http://www.digitalmars.com/d/archives/digitalmars/D/25302.html
http://www.digitalmars.com/d/archives/digitalmars/D/4953.html


Set-up timeouts on thread-related unittests

2014-06-20 Thread Iain Buclaw via Digitalmars-d

Hi,

I've been seeing a problem on the Debian X32 build system where the 
unittest process just hangs and requires manual intervention by 
the poor maintainer to kill the process before the build 
fails due to inactivity.


Haven't yet managed to reduce the problem (it only happens on a 
native X32 system, but not when running X32 under native x86_64), 
but thought it would be a good idea to suggest that any 
thread-related tests should be safely handled by self-terminating after 
a period of waiting.


Thoughts from the phobos maintainers?

Regards
Iain


Re: ANTLR grammar for D?

2014-06-20 Thread dennis luehring via Digitalmars-d

Am 20.06.2014 08:57, schrieb Wesley Hamilton:

I've started making a D grammar for ANTLR4, but I didn't want to
spend days testing and debugging it later if somebody already has
one.

The best search results turn up posts that are 10 years old. Only
one post has a link to a grammar file and the page seems to have
been removed. I also assume it would be obsolete with changes to
ANTLR and D.
http://www.digitalmars.com/d/archives/digitalmars/D/25302.html
http://www.digitalmars.com/d/archives/digitalmars/D/4953.html



most uptodate seems to be https://github.com/Hackerpilot/DGrammar


Re: RFC: Value range propagation for if-else

2014-06-20 Thread Don via Digitalmars-d

On Wednesday, 18 June 2014 at 06:40:21 UTC, Lionello Lunesu wrote:

Hi,



https://github.com/lionello/dmd/compare/if-else-range

There, I've also added a __traits(intrange, expression) which 
returns a tuple with the min and max for the given expression.



Destroy?


The compiler uses value range propagation in this {min, max} 
form, but I think that's an implementation detail. It's well 
suited for arithmetic operations, but less suitable for logical 
operations. For example, this code can't overflow, but {min, max} 
range propagation thinks it can.


ubyte foo ( uint a) {
  return (a & 0x8081) & 0x0FFF;
}

For these types of expressions, {known_one_bits, known_zero_bits} 
works better.
Now, you can track both types of range propagation 
simultaneously, and I think we probably should improve our 
implementation in that way. It would improve the accuracy in many 
cases.
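
(A small sketch of that idea, with illustrative names: tracking known-zero 
bits through the two ANDs in foo() above shows the result always fits in a 
ubyte, which plain {min, max} tracking cannot see.)

struct KnownBits
{
    uint ones;    // bits known to be 1
    uint zeros;   // bits known to be 0
}

KnownBits andConst(KnownBits a, uint mask)
{
    // ANDing with a constant clears every bit that is 0 in the mask.
    return KnownBits(a.ones & mask, a.zeros | ~mask);
}

unittest
{
    auto any = KnownBits(0, 0);                        // a completely unknown uint
    auto r = andConst(andConst(any, 0x8081), 0x0FFF);
    assert(~r.zeros == 0x81);                          // only bits 0x81 can be set: fits in a ubyte
}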


Question: If we had implemented that already, would you still 
want the interface you're proposing here?




Re: ANTLR grammar for D?

2014-06-20 Thread Brian Schott via Digitalmars-d

On Friday, 20 June 2014 at 06:57:31 UTC, Wesley Hamilton wrote:
I've started making a D grammar for ANTLR4, but I didn't want 
to spend days testing and debugging it later if somebody 
already has one.


The best search results turn up posts that are 10 years old. 
Only one post has a link to a grammar file and the page seems 
to have been removed. I also assume it would be obsolete with 
changes to ANTLR and D.

http://www.digitalmars.com/d/archives/digitalmars/D/25302.html
http://www.digitalmars.com/d/archives/digitalmars/D/4953.html


https://github.com/Hackerpilot/DGrammar/blob/master/D.g4

It works around a few problems in ANTLR by combining a bunch of 
rules that should be separate into the unaryExpression rule, but 
it does build and produce a parse tree now. (I have no idea if 
the parse trees are always correct)


Re: ANTLR grammar for D?

2014-06-20 Thread Wesley Hamilton via Digitalmars-d

On Friday, 20 June 2014 at 07:47:44 UTC, dennis luehring wrote:

Am 20.06.2014 08:57, schrieb Wesley Hamilton:
I've started making a D grammar for ANTLR4, but I didn't want 
to
spend days testing and debugging it later if somebody already 
has

one.

The best search results turn up posts that are 10 years old. 
Only
one post has a link to a grammar file and the page seems to 
have
been removed. I also assume it would be obsolete with changes 
to

ANTLR and D.
http://www.digitalmars.com/d/archives/digitalmars/D/25302.html
http://www.digitalmars.com/d/archives/digitalmars/D/4953.html



most uptodate seems to be 
https://github.com/Hackerpilot/DGrammar


Thanks. Just realized that the add grammar button for ANTLR 
grammar list is broken... so that could be why it's not there. 
I'll probably still finish the grammar I'm making since I'm 75% 
done. That's a great reference, though. I think it's missing a 
few minor details like delimited strings, token strings, and 
assembly keywords.


It should help where the Language Reference pages aren't 
accurate. For example, I think HexLetter is incorrectly defined.


Re: Icons for .d and .di files

2014-06-20 Thread FreeSlave via Digitalmars-d

On Friday, 20 June 2014 at 05:34:06 UTC, Suliman wrote:

http://dynamic.dlang.ru/Files/2014/Dlang_logos.png


Thanks, but they are still logos, not icons for files. A file icon 
should appear as a document. Like this 
http://th04.deviantart.net/fs70/200H/f/2012/037/1/a/c___programming_language_dock_icon_by_timsmanter-d4ougsk.png



Brad Anderson, yes, that's what I use currently.


Re: nothrow function callbacks in extern(C) code - solution

2014-06-20 Thread Rainer Schuetze via Digitalmars-d



On 19.06.2014 21:59, Walter Bright wrote:

With nothrow and @nogc annotations, we've been motivated to add these
annotations to C system API functions, because obviously such functions
aren't going to throw D exceptions or call the D garbage collector.

But this exposed a problem - functions like C's qsort() take a pointer
to a callback function. The callback function, being supplied by the D
programmer, may throw and may call the garbage collector. By requiring
the callback function to be also nothrow @nogc, this is an unreasonable
requirement besides breaking most existing D code that uses qsort().

This problem applies as well to the Windows APIs and the Posix APIs with
callbacks.

The solution is to use overloading so that if your callback is nothrow,
it will call the nothrow version of qsort, if it is throwable, it calls
the throwable version of qsort.

Never mind that those two versions of qsort are actually the same
function (!), even though D's type system regards them as different.
Although this looks like an unsafe hack, it actually is quite safe,
presuming that the rest of the qsort code itself does not throw. This
technique relies on the fact that extern(C) functions do not get their
types mangled into the names.

Some example code:

   extern (C) { alias int function() fp_t; }
   extern (C) nothrow { alias int function() fpnothrow_t; }

   extern (C) int foo(int a, fp_t fp);
   extern (C) nothrow int foo(int a, fpnothrow_t fp);

   extern (C) int bar();
   extern (C) nothrow int barnothrow();

   void test() {
       foo(1, bar);         // calls the 'throwing' foo()
       foo(1, barnothrow);  // calls the 'nothrow' foo()
   }


This only works for those functions that call the callback function 
directly.


OS functions do not always work this way. They register callbacks for 
later use, like a window procedure or a signal handler.


This causes innocent looking functions to not behave as annotated 
because they internally use the callback functions. E.g. a lot of the 
Windows API functions might use message sending/dispatching internally, 
which might execute both throwing or GC allocating callbacks. These are 
currently not meeting the promise of their annotations.
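
(A sketch of that situation with made-up declarations - registerCallback and 
dispatchEvents are not a real API: the registration call can be overloaded as 
Walter suggests, but a separate nothrow-annotated entry point later runs 
whatever was registered.)

extern (C)         { alias int function() fp_t; }
extern (C) nothrow { alias int function() fpnothrow_t; }

// Registration itself can be overloaded per Walter's scheme...
extern (C)         void registerCallback(fp_t cb);
extern (C) nothrow void registerCallback(fpnothrow_t cb);

// ...but this entry point, annotated nothrow because it is plain C code,
// ends up invoking whichever callback was registered - possibly a throwing one.
extern (C) nothrow void dispatchEvents();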


We either have to be more conservative with annotating OS functions or 
relax the guarantees of nothrow or @nogc. Both alternatives are not very 
compelling.


Re: nothrow function callbacks in extern(C) code - solution

2014-06-20 Thread Paolo Invernizzi via Digitalmars-d

On Thursday, 19 June 2014 at 19:58:58 UTC, Walter Bright wrote:

snip

The callback function, being supplied by the D programmer, may 
throw and may call the garbage collector. By requiring the 
callback function to be also nothrow @nogc, this is an 
unreasonable requirement besides breaking most existing D code 
that uses qsort().


d.learn

I'm missing something, as I'm annotating all my C/API/etc 
callback functions with nothrow: when the callback throws, what 
happens?


I was thinking that this will mess up the stack once the unwind 
proceeds...

What's the use-case for having such a callback 'throwable'?

Thanks!

/d.learn

---
Paolo


Re: nothrow function callbacks in extern(C) code - solution

2014-06-20 Thread w0rp via Digitalmars-d

On Friday, 20 June 2014 at 11:07:48 UTC, Paolo Invernizzi wrote:

On Thursday, 19 June 2014 at 19:58:58 UTC, Walter Bright wrote:

snip

The callback function, being supplied by the D programmer, may 
throw and may call the garbage collector. By requiring the 
callback function to be also nothrow @nogc, this is an 
unreasonable requirement besides breaking most existing D code 
that uses qsort().


d.learn

I'm missing something, as I'm annotating all my C/API/etc 
callback functions with nothrow: when the callback throws, what 
happens?


I was thinking that this will mess up the stack once the unwind 
proceeds...

What's the use-case for having such a callback 'throwable'?

Thanks!

/d.learn

---
Paolo


This is actually a really good point. How can a callback in C 
code expect to throw exceptions? Surely it should be nothrow 
anyway, because it's just not going to work otherwise. Maybe we 
should just strengthen the constraints for that, and make people 
update their code which isn't likely to work anyway. You can make 
any throwing function nothrow by catching Exceptions and throwing 
Errors instead at least. Ideally you wouldn't even throw Errors 
in C callbacks.
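
(A sketch of that Exception-to-Error wrapping, with illustrative names:)

int doWork() { /* ... may throw an Exception ... */ return 0; }

// The extern(C) entry point itself is nothrow; Exceptions become Errors,
// which nothrow does not check for.
extern (C) nothrow int cCallback()
{
    try
        return doWork();
    catch (Exception e)
        throw new Error("callback failed: " ~ e.msg);
}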


Re: Can't debug dmd binary

2014-06-20 Thread Dicebot via Digitalmars-d
`main` is the C main function inside druntime. Your program entry 
point is _Dmain. You may also want to try a git master build of gdb 
with Iain Buclaw's patches for enhanced D support - those are 
awesome beyond imagination and, among other things, add support 
for D symbol (de)mangling :)


Re: D Logos

2014-06-20 Thread Guillaume Chatelet via Digitalmars-d

Please vote.
https://docs.google.com/forms/d/1eL0AgKvoLyd9DVpzwG-mm2Uk82WdCjDrWSBP3MZpEFY/viewform?usp=send_form

Only 5 answers for now.
https://docs.google.com/forms/d/1eL0AgKvoLyd9DVpzwG-mm2Uk82WdCjDrWSBP3MZpEFY/viewanalytics


Re: Can't debug dmd binary

2014-06-20 Thread Lionello Lunesu via Digitalmars-d

On 20/06/14 11:00, Jerry wrote:

Hi folks,

I'm unable to debug binaries built with dmd 2.065.  The platform is
x86-64 Ubuntu 14.04.  This is gdb 7.7.

If I have a simple program:

nodebug.d:

void main() {
   int i;
   i = 3;
}

dmd -g nodebug.d

jlquinn@wyvern:~/d$ gdb nodebug
GNU gdb (Ubuntu 7.7-0ubuntu3.1) 7.7
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as x86_64-linux-gnu.
Type show configuration for configuration details.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.
For help, type help.
Type apropos word to search for commands related to word...
Reading symbols from nodebug...done.
(gdb) b main
Breakpoint 1 at 0x416ecc
(gdb) run
Starting program: /home/jlquinn/d/nodebug
[Thread debugging using libthread_db enabled]
Using host libthread_db library /lib/x86_64-linux-gnu/libthread_db.so.1.

Breakpoint 1, 0x00416ecc in main ()
(gdb) l
1   dl-debug.c: No such file or directory.
(gdb)


Using dmd -gc doesn't help at all.  Any suggestions?

Thanks
Jerry



$gdb nodebug
(gdb) b _Dmain
(gdb) r





Re: RFC: Value range propagation for if-else

2014-06-20 Thread Lionello Lunesu via Digitalmars-d

On 20/06/14 15:53, Don wrote:

On Wednesday, 18 June 2014 at 06:40:21 UTC, Lionello Lunesu wrote:

Hi,



https://github.com/lionello/dmd/compare/if-else-range

There, I've also added a __traits(intrange, expression) which
returns a tuple with the min and max for the given expression.



Destroy?


The compiler uses value range propagation in this {min, max} form, but I
think that's an implementation detail. It's well suited for arithmetic
operations, but less suitable for logical operations. For example, this
code can't overflow, but {min, max} range propagation thinks it can.

ubyte foo ( uint a) {
   return (a & 0x8081) & 0x0FFF;
}

For these types of expressions, {known_one_bits, known_zero_bits} works
better.
Now, you can track both types of range propagation simultaneously, and I
think we probably should improve our implementation in that way. It
would improve the accuracy in many cases.

Question: If we had implemented that already, would you still want the
interface you're proposing here?



You could have different __traits in that case:
__traits(valueRange,...) // for min/max
__traits(bitRange,...) // mask

Your example seems rather artificial though. IRL you'd get a compiler 
warning/error and could fix it by changing the code to & 0xFF. I 
personally have not yet had the need for these bit-masks.


L.


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-20 Thread David Nadlinger via Digitalmars-d

On Friday, 20 June 2014 at 00:15:36 UTC, Walter Bright wrote:

On 6/19/2014 12:59 PM, Joakim wrote:
I don't know enough about these copyright tainting concerns to 
say if it's a
good idea, just pointing out that he was talking about the 
backend, not the

frontend.


Ok, but I'll also point out that LDC and GDC are fully free & 
open source.


Which wouldn't really help Artur (whether his concerns are 
justified or not), as we usually tell people to contribute their 
frontend patches directly to the upstream DMD repository.


David


Re: nothrow function callbacks in extern(C) code - solution

2014-06-20 Thread Artur Skawina via Digitalmars-d
On 06/20/14 01:39, H. S. Teoh via Digitalmars-d wrote:
 On Fri, Jun 20, 2014 at 12:36:53AM +0200, Timon Gehr via Digitalmars-d wrote:
 On 06/19/2014 10:29 PM, Dicebot wrote:
 I have always wondered why `inout` is limited to const when problem
 is almost identical with all other restrictive attributes.

 I have furthermore always wondered why there can always only be one
 `inout' wildcard in scope. This is not the best existing way to solve

 T foo![T : const(int)[]](T arg){ return arg; }

 this can be extended to other attributes, for example in the following way
 (this is just an example):

 void evaluate![transitive_attributes a](void delegate()@a dg)@a{
 dg();
 }

 What if there are multiple delegate arguments?

 What if the delegate arguments themselves take delegate arguments?

 Pretty soon, we need an attribute algebra to express these complicated
 relationships.

Simple propagation, from just one source, would be enough for almost all
cases - there's no need to over-complicate this. The remaining cases could
be handled via introspection and ctfe; this way allows for more options,
not just using some pre-defined algebra subset, which happens to be
supported by a particular compiler (-version).

 It would be nice to have a solution that can handle all of these cases
 without exploding complexity in the syntax.

Actually supporting parametrized attributes, is something that I think
everybody agrees on in principle (hence the lack of discussions when
this topic is mentioned, every few weeks or so). The required semantics
are pretty clear; what I still haven't seen is a good enough syntax
proposal. One syntax that might have worked for built-in attributes 
could have been const!A etc, but I'm not sure if the parameter
inference would be intuitive enough, and it would appear, at least
superficially, to  potentially clash with user defined attributes,
especially once those become more powerful.

artur


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-20 Thread Artur Skawina via Digitalmars-d
On 06/20/14 13:51, David Nadlinger via Digitalmars-d wrote:
 On Friday, 20 June 2014 at 00:15:36 UTC, Walter Bright wrote:
 On 6/19/2014 12:59 PM, Joakim wrote:
 I don't know enough about these copyright tainting concerns to say if it's a
 good idea, just pointing out that he was talking about the backend, not the
 frontend.

 Ok, but I'll also point out that LDC and GDC are fully free & open source.
 
 Which wouldn't really help Artur (whether his concerns are justified or not), 
 as we usually tell people to contribute their frontend patches directly to 
 the upstream DMD repository.

Yes. Also, like I've already said, working on top of a downstream tree
would eventually either result in a fork, or fail, with the latter
being the much more likely result.

I'll just add that I now think I've overstated the gains that splitting
out the free frontend would bring. That's because I've since realized
how hard it would still be to deal with development on top of git head,
when one can not immediately test the result /on real code/ (ie using a
non-dmd backend).
Without a truly shared frontend, there's no good solution, I guess. :(

artur


Perlin noise benchmark speed

2014-06-20 Thread Nick Treleaven via Digitalmars-d

Hi,
A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr

It apparently shows the 3 main D compilers producing slower code than 
Go, Rust, gcc, clang, Nimrod:


https://github.com/nsf/pnoise#readme

I initially wondered about std.random, but got this response:

Yeah, but std.random is not used in that benchmark, it just initializes 
256 random vectors and permutates 256 sequential integers. What spins in 
a loop is just plain FP math and array read/writes. I'm sure it can be 
done faster, maybe D compilers are bad at automatic inlining or something. 


Obviously this is only one person's benchmark, but I wondered if people 
would like to check their code and suggest reasons for the speed deficit.


Re: ANTLR grammar for D?

2014-06-20 Thread Artur Skawina via Digitalmars-d
On 06/20/14 11:22, Wesley Hamilton via Digitalmars-d wrote:
 It should help where the Language Reference pages aren't accurate. For 
 example, I think HexLetter is incorrectly defined.

What's the problem with HexLetter?

Once upon a time I did play with parsing D, unfortunately the
compiler situation has indirectly resulted in a year+ long pause,
as I was (maybe naively) hoping to be able to finish the project
using an at least semi-modern D dialect, so that it would be
usable for not just me... At least the lexer was done by then, and
I think I fixed most of the dlang problems during the conversion to
PEG. It's still available, maybe it has some value as an additional
reference:

http://repo.or.cz/w/girtod.git/blob/refs/heads/lexer:/dlanglexer.d

At least back when I did that, the dlang.org docs had quite a few
problems; some have probably been fixed since.

artur


Re: Perlin noise benchmark speed

2014-06-20 Thread Nick Treleaven via Digitalmars-d

On 20/06/2014 13:32, Nick Treleaven wrote:

It apparently shows the 3 main D compilers producing slower code than
Go, Rust, gcc, clang, Nimrod:


Also, it does appear to be using the correct compiler flags (at least 
for dmd):

https://github.com/nsf/pnoise/blob/master/compile.bash


Re: Adding the ?. null verification

2014-06-20 Thread Etienne via Digitalmars-d

On 2014-06-19 6:23 PM, H. S. Teoh via Digitalmars-d wrote:


Unfortunately, it appears that opDispatch has become too complex to be
inlined, so now gdc is unable to simplify it to a series of nested if's.
:-(


T



Meh, I don't mind specifying that condition manually after all... having 
a default value isn't really on top of my list =)


Re: Perlin noise benchmark speed

2014-06-20 Thread David Nadlinger via Digitalmars-d

On Friday, 20 June 2014 at 12:34:55 UTC, Nick Treleaven wrote:

On 20/06/2014 13:32, Nick Treleaven wrote:
It apparently shows the 3 main D compilers producing slower 
code than

Go, Rust, gcc, clang, Nimrod:


Also, it does appear to be using the correct compiler flags (at 
least for dmd):

https://github.com/nsf/pnoise/blob/master/compile.bash


-release is missing, although that probably isn't playing a big 
role here.


Another minor issue is that Noise2DContext isn't final, making 
the calls to get virtual.


This shouldn't cause such a big difference though. Hopefully 
somebody can investigate this more closely.


David


Re: Perlin noise benchmark speed

2014-06-20 Thread dennis luehring via Digitalmars-d

Am 20.06.2014 14:32, schrieb Nick Treleaven:

Hi,
A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr

It apparently shows the 3 main D compilers producing slower code than
Go, Rust, gcc, clang, Nimrod:

https://github.com/nsf/pnoise#readme

I initially wondered about std.random, but got this response:

Yeah, but std.random is not used in that benchmark, it just initializes
256 random vectors and permutates 256 sequential integers. What spins in
a loop is just plain FP math and array read/writes. I'm sure it can be
done faster, maybe D compilers are bad at automatic inlining or something. 

Obviously this is only one person's benchmark, but I wondered if people
would like to check their code and suggest reasons for the speed deficit.



write, printf etc. performance is benchmarked also - so not clear
if pnoise is super-fast but write is super-slow etc...


Re: Adding the ?. null verification

2014-06-20 Thread Etienne via Digitalmars-d

On 2014-06-19 6:30 PM, H. S. Teoh via Digitalmars-d wrote:

On Thu, Jun 19, 2014 at 03:23:33PM -0700, H. S. Teoh via Digitalmars-d wrote:
[...]

Unfortunately, it appears that opDispatch has become too complex to be
inlined, so now gdc is unable to simplify it to a series of nested
if's.  :-(

[...]

Surprisingly, if we just stick .exists in there unconditionally, like
you did, then gdc actually optimizes it away completely, so that we're
back to the equivalent of nested if's! So your solution is superior
after all.  :)


T



Oh I just saw this. Good, so I can keep my .or() method ! :)


Re: Perlin noise benchmark speed

2014-06-20 Thread MrSmith via Digitalmars-d

On Friday, 20 June 2014 at 12:56:46 UTC, David Nadlinger wrote:

On Friday, 20 June 2014 at 12:34:55 UTC, Nick Treleaven wrote:

On 20/06/2014 13:32, Nick Treleaven wrote:
It apparently shows the 3 main D compilers producing slower 
code than

Go, Rust, gcc, clang, Nimrod:


Also, it does appear to be using the correct compiler flags 
(at least for dmd):

https://github.com/nsf/pnoise/blob/master/compile.bash


-release is missing, although that probably isn't playing a big 
role here.


Another minor issue is that Noise2DContext isn't final, making 
the calls to get virtual.


This shouldn't cause such a big difference though. Hopefully 
somebody can investigate this more closely.


David


struct can be used instead of class


Re: Perlin noise benchmark speed

2014-06-20 Thread Robert Schadek via Digitalmars-d
On 06/20/2014 02:34 PM, Nick Treleaven via Digitalmars-d wrote:
 On 20/06/2014 13:32, Nick Treleaven wrote:
 It apparently shows the 3 main D compilers producing slower code than
 Go, Rust, gcc, clang, Nimrod:

 Also, it does appear to be using the correct compiler flags (at least
 for dmd):
 https://github.com/nsf/pnoise/blob/master/compile.bash
I added some final pure @safe stuff


Re: Perlin noise benchmark speed

2014-06-20 Thread dennis luehring via Digitalmars-d

Am 20.06.2014 15:14, schrieb dennis luehring:

Am 20.06.2014 14:32, schrieb Nick Treleaven:

Hi,
A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr

It apparently shows the 3 main D compilers producing slower code than
Go, Rust, gcc, clang, Nimrod:

https://github.com/nsf/pnoise#readme

I initially wondered about std.random, but got this response:

Yeah, but std.random is not used in that benchmark, it just initializes
256 random vectors and permutates 256 sequential integers. What spins in
a loop is just plain FP math and array read/writes. I'm sure it can be
done faster, maybe D compilers are bad at automatic inlining or something. 

Obviously this is only one person's benchmark, but I wondered if people
would like to check their code and suggest reasons for the speed deficit.



write, printf etc. performance is benchmarked also - so not clear
if pnoise is super-fast but write is super-slow etc...



using perf with 10 is maybe too small to give good average result infos,
and also runtime startup etc. is measured - it's not clear what is slower

these benchmarks should be separated into 3 parts:

runtime-startup
pure pnoise
result output - needed only once for verification; returning dummy output 
would fit better to test the pnoise speed


are array bounds checks active?
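
(A sketch of separating the measurement, using the std.datetime.StopWatch API 
of that era; the loop body is a placeholder:)

import std.datetime : StopWatch;
import std.stdio : writeln;

void main()
{
    StopWatch sw;
    sw.start();
    // ... run only the pnoise loop here, no printing ...
    sw.stop();
    writeln("pnoise only: ", sw.peek().msecs, " ms");   // report after timing stops
}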


Re: Perlin noise benchmark speed

2014-06-20 Thread Robert Schadek via Digitalmars-d
On 06/20/2014 02:56 PM, David Nadlinger via Digitalmars-d wrote:
 On Friday, 20 June 2014 at 12:34:55 UTC, Nick Treleaven wrote:
 On 20/06/2014 13:32, Nick Treleaven wrote:
 It apparently shows the 3 main D compilers producing slower code than
 Go, Rust, gcc, clang, Nimrod:

 Also, it does appear to be using the correct compiler flags (at least
 for dmd):
 https://github.com/nsf/pnoise/blob/master/compile.bash

 -release is missing, although that probably isn't playing a big role
 here.

 Another minor issues is that Noise2DContext isn't final, making the
 calls to get virtual.

 This should cause such a big difference though. Hopefully somebody can
 investigate this more closely.

 David
I converted Noise2DContext into a struct; I'm gonna add some more to my patch


Re: Perlin noise benchmark speed

2014-06-20 Thread Mattcoder via Digitalmars-d

On Friday, 20 June 2014 at 13:14:04 UTC, dennis luehring wrote:
write, printf etc. performance is benchmarked also - so not 
clear

if pnoise is super-fast but write is super-slow etc...


Indeed, and on Windows (at least 8), the size of the command window 
(CMD) interferes with the result drastically... for example: 
running this test with the console maximized takes 2.58s, while 
the same test in a small window takes 2.11s!


Matheus.



Re: Adding the ?. null verification

2014-06-20 Thread Bienlein via Digitalmars-d

On Wednesday, 18 June 2014 at 15:57:40 UTC, Etienne wrote:

On 2014-06-18 11:55 AM, bearophile wrote:

Etienne:


writeln(obj.member?.nested?.val);


What about an approach like Scala instead?

Bye,
bearophile


You mean like this?


http://stackoverflow.com/questions/1163393/best-scala-imitation-of-groovys-safe-dereference-operator

def ?[A](block: => A) =
  try { block } catch {
    case e: NullPointerException if e.getStackTrace()(2).getMethodName == "$qmark" => null
    case e => throw e
  }

val a = ?(b.c.d.e)


I think he means to use the Option class instead of returning 
null. Also Rust does it that way.
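
(For reference, a library-level sketch of the safe-navigation idea; Maybe, 
maybe and or are made-up names, not the code discussed in this thread, and 
the sketch only follows chains of class-typed members.)

struct Maybe(T) if (is(T == class))
{
    T value;

    // Forward member access, propagating "missing" instead of dereferencing null.
    auto opDispatch(string member)()
    {
        alias R = typeof(__traits(getMember, value, member));
        static assert(is(R == class), "sketch only follows class-typed members");
        return Maybe!R(value is null ? null : __traits(getMember, value, member));
    }

    // Fallback in the spirit of the .or() method mentioned earlier in the thread.
    T or(T fallback) { return value is null ? fallback : value; }
}

Maybe!T maybe(T)(T v) if (is(T == class)) { return Maybe!T(v); }

class Node { Node next; int val; }

unittest
{
    auto n = new Node;                              // n.next is null
    assert(maybe(n).next.or(new Node).val == 0);    // no null dereference
}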


Re: Adding the ?. null verification

2014-06-20 Thread H. S. Teoh via Digitalmars-d
On Fri, Jun 20, 2014 at 08:57:46AM -0400, Etienne via Digitalmars-d wrote:
 On 2014-06-19 6:23 PM, H. S. Teoh via Digitalmars-d wrote:
 
 Unfortunately, it appears that opDispatch has become too complex to
 be inlined, so now gdc is unable to simplify it to a series of nested
 if's.
 :-(
 
 
 T
 
 
 Meh, I don't mind specifying that condition manually after all...
 having a default value isn't really on top of my list =)

True. Actually, I did my disassembly test again, and now I can't seem to
coax gdc to optimize out the .exists flag, esp. when .or is involved.
Perhaps that was a little too ambitious; maybe it's better to stick with
the original simple solution after all. :P


T

-- 
Laissez-faire is a French term commonly interpreted by Conservatives to mean 
'lazy fairy,' which is the belief that if governments are lazy enough, the Good 
Fairy will come down from heaven and do all their work for them.


Re: Perlin noise benchmark speed

2014-06-20 Thread David Nadlinger via Digitalmars-d
On Friday, 20 June 2014 at 13:20:16 UTC, Robert Schadek via 
Digitalmars-d wrote:

I added some final pure @safe stuff


Thanks. As a general comment, I'd be careful with suggesting the 
use of pure/@safe/… for performance improvements in 
microbenchmarks. While it is certainly good D style to use them 
wherever possible, it might lead people less familiar with D to 
believe that fast D code needs a lot of annotations.


David


Re: Perlin noise benchmark speed

2014-06-20 Thread David Nadlinger via Digitalmars-d

On Friday, 20 June 2014 at 13:46:26 UTC, Mattcoder wrote:

On Friday, 20 June 2014 at 13:14:04 UTC, dennis luehring wrote:
write, printf etc. performance is benchmarked also - so not 
clear

if pnoise is super-fast but write is super-slow etc...


Indeed and using Windows (At least 8), the size of 
command-window (CMD) interferes in the result drastically... 
for example: running this test with console maximized will 
take: 2.58s while the same test but in small window: 2.11s!


Before I wrote the above, I briefly ran the benchmark on my local 
(OS X) machine, and verified that the bulk of the time is indeed 
spent in the noise calculation loop (with stdout piped into 
/dev/null). Still, the LDC-compiled code is only about half as 
fast as the Clang-compiled version, and there is no good reason 
why it should be.


My new guess is a difference in inlining heuristics (note also 
that the Rust version uses inlining hints). The big difference 
between GCC and Clang might be a hint that the performance drop 
is caused by a rather minute difference in optimizer tuning.


Thus, we really need somebody to sit down with a 
profiler/disassembler and figure out what is going on.


David


Re: Set-up timeouts on thread-related unittests

2014-06-20 Thread Steven Schveighoffer via Digitalmars-d
On Fri, 20 Jun 2014 03:13:23 -0400, Iain Buclaw ibuc...@gdcproject.org  
wrote:



Hi,

I've been seeing a problem on the Debian X32 build system where unittest  
process just hangs, and require manual intervention by the poor  
maintainer to kill the process manually before the build fails due to  
inactivity.


Haven't yet managed to reduce the problem (it only happens on a native  
X32 system, but not when running X32 under native x86_64), but thought  
it would be a good idea to suggest that any thread related tests should  
be safely handled by self terminating after a period of waiting.


Thoughts from the phobos maintainers?


This could probably be implemented quite simply in druntime.

I'd be hesitant to make it default, but it would be nice to tag unit tests  
as having a maximum timeout. Yet another case for using attributes on unit  
tests and RTInfo for modules...


-Steve


Re: Perlin noise benchmark speed

2014-06-20 Thread Ary Borenszweig via Digitalmars-d

On 6/20/14, 9:32 AM, Nick Treleaven wrote:

Hi,
A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr


It apparently shows the 3 main D compilers producing slower code than
Go, Rust, gcc, clang, Nimrod:

https://github.com/nsf/pnoise#readme

I initially wondered about std.random, but got this response:

Yeah, but std.random is not used in that benchmark, it just initializes
256 random vectors and permutates 256 sequential integers. What spins in
a loop is just plain FP math and array read/writes. I'm sure it can be
done faster, maybe D compilers are bad at automatic inlining or
something. 

Obviously this is only one person's benchmark, but I wondered if people
would like to check their code and suggest reasons for the speed deficit.


I just tried it with ldc and it's faster (faster than Go, slower than 
Nimrod). But this is still slower than other languages. And other languages 
keep the array bounds check on...


Re: Perlin noise benchmark speed

2014-06-20 Thread bearophile via Digitalmars-d

Nick Treleaven:


A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr


This should be compiled with LDC2, it's more idiomatic and a 
little faster than the original D version:

http://dpaste.dzfl.pl/8d2ff04b62d3

I have already seen that if I inline Noise2DContext.get in the 
main manually the program gets faster (but not yet fast enough).


Bye,
bearophile


Re: Perlin noise benchmark speed

2014-06-20 Thread bearophile via Digitalmars-d

http://dpaste.dzfl.pl/8d2ff04b62d3


Sorry for the awful tabs.

Bye,
bearophile


Re: Perlin noise benchmark speed

2014-06-20 Thread bearophile via Digitalmars-d
If I add this import in Noise2DContext.getGradients the run-time 
decreases a lot (I am now just two times slower than gcc with 
-Ofast):


import core.stdc.math: floor;

Bye,
bearophile


Re: Perlin noise benchmark speed

2014-06-20 Thread whassup via Digitalmars-d

 GO BEAROPHILE YOU CAN DO IT

On Friday, 20 June 2014 at 15:24:38 UTC, bearophile wrote:
If I add this import in Noise2DContext.getGradients the 
run-time decreases a lot (I am now just two times slower than 
gcc with -Ofast):


import core.stdc.math: floor;

Bye,
bearophile




Re: Perlin noise benchmark speed

2014-06-20 Thread JR via Digitalmars-d

On Friday, 20 June 2014 at 15:24:38 UTC, bearophile wrote:
If I add this import in Noise2DContext.getGradients the 
run-time decreases a lot (I am now just two times slower than 
gcc with -Ofast):


import core.stdc.math: floor;

Bye,
bearophile


Was just about to post that if I cheat and replace usage of 
floor(x) with cast(float)cast(int)x, ldc2 is almost down to gcc 
speeds (119.6ms average over 100 full executions vs gcc 102.7ms).


It stood out in the callgraph. Because profiling before 
optimizing.


Re: Icons for .d and .di files

2014-06-20 Thread Jordi Sayol via Digitalmars-d
El 20/06/14 11:49, FreeSlave via Digitalmars-d ha escrit:
 Thanks, but they are still logos, not icons for files. File icon should 
 appear as document. Like this 
 http://th04.deviantart.net/fs70/200H/f/2012/037/1/a/c___programming_language_dock_icon_by_timsmanter-d4ougsk.png


http://s28.postimg.org/d4hqy7hv1/dsrc1.png
http://s28.postimg.org/kyicjlpnx/dsrc2.png
http://s28.postimg.org/4os6gpezx/dsrc3.png

-- 
Jordi Sayol


Re: Perlin noise benchmark speed

2014-06-20 Thread bearophile via Digitalmars-d

So this is the best so far version:

http://dpaste.dzfl.pl/8dae9b359f27

I don't show the version with the manually inlined function.

(I have also seen that GCC generates a little faster code on my CPU 
if I don't use SSE registers.)


Bye,
bearophile


Re: ANTLR grammar for D?

2014-06-20 Thread Wesley Hamilton via Digitalmars-d
On Friday, 20 June 2014 at 12:35:26 UTC, Artur Skawina via 
Digitalmars-d wrote:

On 06/20/14 11:22, Wesley Hamilton via Digitalmars-d wrote:
It should help where the Language Reference pages aren't 
accurate. For example, I think HexLetter is incorrectly 
defined.


What's the problem with HexLetter?

Once upon a time I did play with parsing D, unfortunately the
compiler situation has indirectly resulted in a year+ long 
pause,

as I was (maybe naively) hoping to be able to finish the project
using an at least semi-modern D dialect, so that it would be
usable for not just me... At least the lexer was done by then, 
and
I think I fixed most of the dlang problems during the 
conversion to
PEG. It's still available, maybe it has some value as an 
additional

reference:

http://repo.or.cz/w/girtod.git/blob/refs/heads/lexer:/dlanglexer.d

At least back when I did that, the dlang.org docs had quite a 
few

problems; some have probably been fixed since.

artur


http://dlang.org/lex
Maybe I'm blind but HexLetter includes an underscore. HexDigitsUS 
isn't defined. Based on this I figure there's a possibility of 
another error. I realize that there's an "improve this page" 
button... maybe I'll get around to testing the compiler with this 
theory.


The Magic Forest discussion

2014-06-20 Thread Justin Whear via Digitalmars-d
An interesting post: http://www.reddit.com/r/programming/comments/28mp3m/the_magic_forest_problem_revisited_optimising/

The author uses D as a sort of lingua franca to examine various solutions 
to a classic problem.


Re: Icons for .d and .di files

2014-06-20 Thread Jordi Sayol via Digitalmars-d
El 20/06/14 18:02, Jordi Sayol via Digitalmars-d ha escrit:
 El 20/06/14 11:49, FreeSlave via Digitalmars-d ha escrit:
 Thanks, but they are still logos, not icons for files. File icon should 
 appear as document. Like this 
 http://th04.deviantart.net/fs70/200H/f/2012/037/1/a/c___programming_language_dock_icon_by_timsmanter-d4ougsk.png
 
 
 http://s28.postimg.org/d4hqy7hv1/dsrc1.png
 http://s28.postimg.org/kyicjlpnx/dsrc2.png
 http://s28.postimg.org/4os6gpezx/dsrc3.png
 

A bigger one

http://s7.postimg.org/cmwclyxh7/dsrc4.png

-- 
Jordi Sayol


Re: Thanks for the bounty!

2014-06-20 Thread John via Digitalmars-d
On Thursday, 19 June 2014 at 22:27:58 UTC, Andrej Mitrovic via 
Digitalmars-d wrote:
I claimed a bounty recently, and I just wanted to say thanks to 
Andrei and

his company for backing the bounty.

I won't be able to take any future bounties from Facebook due 
to internal
competition policies, but that's ok as I'm now a paid 
programmer anyway. :)


It was fun to win something while coding! Cheers!



Congratulations!


Re: Icons for .d and .di files

2014-06-20 Thread Meta via Digitalmars-d
On Friday, 20 June 2014 at 16:41:14 UTC, Jordi Sayol via 
Digitalmars-d wrote:

El 20/06/14 18:02, Jordi Sayol via Digitalmars-d ha escrit:

El 20/06/14 11:49, FreeSlave via Digitalmars-d ha escrit:
Thanks, but they are still logos, not icons for files. File 
icon should appear as document. Like this 
http://th04.deviantart.net/fs70/200H/f/2012/037/1/a/c___programming_language_dock_icon_by_timsmanter-d4ougsk.png



http://s28.postimg.org/d4hqy7hv1/dsrc1.png
http://s28.postimg.org/kyicjlpnx/dsrc2.png
http://s28.postimg.org/4os6gpezx/dsrc3.png



A bigger one

http://s7.postimg.org/cmwclyxh7/dsrc4.png


I think the third one is best in this case. You don't want a 
really detailed logo for a file icon.


Re: Icons for .d and .di files

2014-06-20 Thread MrSmith via Digitalmars-d

On Friday, 20 June 2014 at 16:52:55 UTC, Meta wrote:
On Friday, 20 June 2014 at 16:41:14 UTC, Jordi Sayol via 
Digitalmars-d wrote:

El 20/06/14 18:02, Jordi Sayol via Digitalmars-d ha escrit:

El 20/06/14 11:49, FreeSlave via Digitalmars-d ha escrit:
Thanks, but they are still logos, not icons for files. File 
icon should appear as document. Like this 
http://th04.deviantart.net/fs70/200H/f/2012/037/1/a/c___programming_language_dock_icon_by_timsmanter-d4ougsk.png



http://s28.postimg.org/d4hqy7hv1/dsrc1.png
http://s28.postimg.org/kyicjlpnx/dsrc2.png
http://s28.postimg.org/4os6gpezx/dsrc3.png



A bigger one

http://s7.postimg.org/cmwclyxh7/dsrc4.png


I think the third one is best in this case. You don't want a 
really detailed logo for a file icon.


+1


Re: Adding the ?. null verification

2014-06-20 Thread Etienne via Digitalmars-d

On 2014-06-20 10:29 AM, H. S. Teoh via Digitalmars-d wrote:

True. Actually, I did my disassembly test again, and now I can't seem to
coax gdc to optimize out the .exists flag, esp. when .or is involved.
Perhaps that was a little too ambitious; maybe it's better to stick with
the original simple solution after all. :P


T



Try marking the or method as const, and the bool as immutable maybe?


Re: Set-up timeouts on thread-related unittests

2014-06-20 Thread Iain Buclaw via Digitalmars-d
On 20 Jun 2014 16:00, Steven Schveighoffer via Digitalmars-d 
digitalmars-d@puremagic.com wrote:

 On Fri, 20 Jun 2014 03:13:23 -0400, Iain Buclaw ibuc...@gdcproject.org
wrote:

 Hi,

 I've been seeing a problem on the Debian X32 build system where unittest
process just hangs, and require manual intervention by the poor maintainer
to kill the process manually before the build fails due to inactivity.

 Haven't yet managed to reduce the problem (it only happens on a native
X32 system, but not when running X32 under native x86_64), but thought it
would be a good idea to suggest that any thread related tests should be
safely handled by self terminating after a period of waiting.

 Thoughts from the phobos maintainers?


 This could probably be implemented quite simply in druntime.

 I'd be hesitant to make it default, but it would be nice to tag unit
tests as having a maximum timeout. Yet another case for using attributes on
unit tests and RTInfo for modules...


I don't see a problem using it as default for these.

1) I assume there is already a timeout for the TCP tests.

2) If the test runs a shortlived (ie: increments some global value)
function in 100 parallel threads, the maintainer of the module who wrote
that test should safely be able to assume that it shouldn't take more than
60 seconds to execute.


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-20 Thread Walter Bright via Digitalmars-d

On 6/20/2014 5:13 AM, Artur Skawina via Digitalmars-d wrote:

On 06/20/14 13:51, David Nadlinger via Digitalmars-d wrote:

Which wouldn't really help Artur (whether his concerns are justified or
not), as we usually tell people to contribute their frontend patches
directly to the upstream DMD repository.


Yes. Also, like I've already said, working on top of a downstream tree would
eventually either result in a fork, or fail, with the latter being the much
more likely result.

I'll just add that I now think I've overstated the gains that splitting out
the free frontend would bring. That's because I've since realized how hard it
would still be to deal with development on top of git head, when one can not
immediately test the result /on real code/ (ie using a non-dmd backend).
Without a truly shared frontend, there's no good solution, I guess. :(


Just install dmd on your machine and contribute to the front end that way. Just 
because the back end is not what you want, it isn't going to corrupt your free 
open source contributions in the slightest, and those changes will make their 
way into LDC and GDC.


Or you can contribute to the repositories for GDC and LDC directly, and then 
issue PR's for them into the main front end github.





Re: Set-up timeouts on thread-related unittests

2014-06-20 Thread Sean Kelly via Digitalmars-d

On Friday, 20 June 2014 at 07:13:24 UTC, Iain Buclaw wrote:

Hi,

I've been seeing a problem on the Debian X32 build system where 
unittest process just hangs, and require manual intervention by 
the poor maintainer to kill the process manually before the 
build fails due to inactivity.


Haven't yet managed to reduce the problem (it only happens on a 
native X32 system, but not when running X32 under native 
x86_64), but thought it would be a good idea to suggest that 
any thread related tests should be safely handled by self 
terminating after a period of waiting.


Thoughts from the phobos maintainers?


I'm surprised that there are thread-related tests that deadlock.
All the ones I wrote time out for exactly this reason.  Of
course, getting the timings right can be a pain, so there's no
perfect solution.


Re: Thanks for the bounty!

2014-06-20 Thread Craig Dillabaugh via Digitalmars-d
On Thursday, 19 June 2014 at 22:27:58 UTC, Andrej Mitrovic via 
Digitalmars-d wrote:
I claimed a bounty recently, and I just wanted to say thanks to 
Andrei and

his company for backing the bounty.

I won't be able to take any future bounties from Facebook due 
to internal
competition policies, but that's ok as I'm now a paid 
programmer anyway. :)


It was fun to win something while coding! Cheers!


Congratulations on the new job.


Re: Set-up timeouts on thread-related unittests

2014-06-20 Thread Steven Schveighoffer via Digitalmars-d
On Fri, 20 Jun 2014 13:13:30 -0400, Iain Buclaw via Digitalmars-d  
digitalmars-d@puremagic.com wrote:



On 20 Jun 2014 16:00, Steven Schveighoffer via Digitalmars-d 
digitalmars-d@puremagic.com wrote:


On Fri, 20 Jun 2014 03:13:23 -0400, Iain Buclaw ibuc...@gdcproject.org

wrote:



Hi,

I've been seeing a problem on the Debian X32 build system where  
unittest
process just hangs, and require manual intervention by the poor  
maintainer

to kill the process manually before the build fails due to inactivity.


Haven't yet managed to reduce the problem (it only happens on a native

X32 system, but not when running X32 under native x86_64), but thought it
would be a good idea to suggest that any thread related tests should be
safely handled by self terminating after a period of waiting.


Thoughts from the phobos maintainers?



This could probably be implemented quite simply in druntime.

I'd be hesitant to make it default, but it would be nice to tag unit
tests as having a maximum timeout. Yet another case for using attributes  
on

unit tests and RTInfo for modules...




I don't see a problem using it as default for these.


No, I mean that druntime would run all unit tests with an expectation that  
each unit test should time out after N seconds.


I think it's much cleaner and less error prone to implement the timeout in  
the unittest runner than in the unit test itself.


But of course, there might be exceptions, we can't put those restrictions  
on all code. A nice feature would be if the default was to have a timeout  
of say 1 second, and then allow users to specify an alternate/infinite  
timeout based on a UDA.
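
(A rough sketch of how such a per-test timeout could look - runWithTimeout is 
an invented name: the runner starts the test body in its own thread and polls 
for completion up to a limit, which a UDA could supply per test.)

import core.thread : Thread;
import core.time : Duration, msecs;

bool runWithTimeout(void delegate() testBody, Duration limit)
{
    auto t = new Thread(testBody);
    t.isDaemon = true;               // a hung test won't keep the process alive
    t.start();

    Duration waited;
    while (t.isRunning && waited < limit)
    {
        Thread.sleep(10.msecs);
        waited += 10.msecs;
    }
    return !t.isRunning;             // false: the test timed out
}

unittest
{
    void quickTest() { }
    assert(runWithTimeout(&quickTest, 1000.msecs));
}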


-Steve


Re: ANTLR grammar for D?

2014-06-20 Thread Brian Schott via Digitalmars-d

On Friday, 20 June 2014 at 09:22:07 UTC, Wesley Hamilton wrote:
Thanks. Just realized that the add grammar button for ANTLR 
grammar list is broken... so that could be why it's not there. 
I'll probably still finish the grammar I'm making since I'm 75% 
done. That's a great reference, though. I think it's missing a 
few minor details like delimited strings, token strings, and 
assembly keywords.


Keep in mind that assembly keywords aren't keywords outside of 
ASM blocks. You need your lexer to identify them as identifiers.


It should help where the Language Reference pages aren't 
accurate. For example, I think HexLetter is incorrectly defined.


If you find problems in the grammar please file an issue on 
Github or create a pull request.


If you need the AST of some D code, you'll save a lot of time by 
downloading D-Scanner and running dscanner --ast sourcecode.d > 
sourcecode_ast.xml


Re: Perlin noise benchmark speed

2014-06-20 Thread Mattcoder via Digitalmars-d

On Friday, 20 June 2014 at 16:02:56 UTC, bearophile wrote:

So this is the best so far version:

http://dpaste.dzfl.pl/8dae9b359f27


Just one note, with the last version of DMD:

dmd -O -noboundscheck -inline -release pnoise.d
pnoise.d(42): Error: pure function 'pnoise.Noise2DContext.getGradients' cannot call impure function 'core.stdc.math.floor'
pnoise.d(43): Error: pure function 'pnoise.Noise2DContext.getGradients' cannot call impure function 'core.stdc.math.floor'

Matheus.


Re: Set-up timeouts on thread-related unittests

2014-06-20 Thread Sean Kelly via Digitalmars-d

On Friday, 20 June 2014 at 18:24:21 UTC, Steven Schveighoffer
wrote:


No, I mean that druntime would run all unit tests with an 
expectation that each unit test should time out after N seconds.


I'd be more inclined to have the test runner kill the process if
it takes more than N seconds to complete and call that a test
failure.  Figuring out something reasonable to do in the unit
tester within Druntime would be difficult.


Re: Perlin noise benchmark speed

2014-06-20 Thread Mattcoder via Digitalmars-d

On Friday, 20 June 2014 at 18:32:22 UTC, dennis luehring wrote:
it does not make sense to optimize this example more and more - it should be fast with the original version (except the missing finals on the virtuals)


Oh please, let him continue, I'm really learning a lot with these 
optimizations.


Matheus.


Re: Perlin noise benchmark speed

2014-06-20 Thread dennis luehring via Digitalmars-d

Am 20.06.2014 17:09, schrieb bearophile:

Nick Treleaven:


A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr


This should be compiled with LDC2, it's more idiomatic and a
little faster than the original D version:
http://dpaste.dzfl.pl/8d2ff04b62d3

I have already seen that if I inline Noise2DContext.get in the
main manually the program gets faster (but not yet fast enough).

Bye,
bearophile



it does not make sense to optimize this example more and more - it should be fast with the original version (except the missing finals on the virtuals)


Re: ANTLR grammar for D?

2014-06-20 Thread Wesley Hamilton via Digitalmars-d

On Friday, 20 June 2014 at 18:20:36 UTC, Brian Schott wrote:

On Friday, 20 June 2014 at 09:22:07 UTC, Wesley Hamilton wrote:
Thanks. Just realized that the add grammar button for ANTLR 
grammar list is broken... so that could be why it's not there. 
I'll probably still finish the grammar I'm making since I'm 
75% done. That's a great reference, though. I think it's 
missing a few minor details like delimited strings, token 
strings, and assembly keywords.


Keep in mind that assembly keywords aren't keywords outside of 
ASM blocks. You need your lexer to identify them as identifiers.


It should help where the Language Reference pages aren't 
accurate. For example, I think HexLetter is incorrectly 
defined.


If you find problems in the grammar please file an issue on 
Github or create a pull request.


If you need the AST of some D code, you'll save a lot of time 
by downloading D-Scanner and running dscanner --ast 
sourcecode.d > sourcecode_ast.xml


My intent is to develop a language based on D and a compiler to 
go with it. I've done something similar using ANTLR once before. 
I might turn it into a BS project.


Re: Perlin noise benchmark speed

2014-06-20 Thread Mattcoder via Digitalmars-d

On Friday, 20 June 2014 at 18:29:35 UTC, Mattcoder wrote:

On Friday, 20 June 2014 at 16:02:56 UTC, bearophile wrote:

So this is the best so far version:

http://dpaste.dzfl.pl/8dae9b359f27


Just one note, with the last version of DMD:

dmd -O -noboundscheck -inline -release pnoise.d
pnoise.d(42): Error: pure function 'pnoise.Noise2DContext.getGradients' cannot call impure function 'core.stdc.math.floor'
pnoise.d(43): Error: pure function 'pnoise.Noise2DContext.getGradients' cannot call impure function 'core.stdc.math.floor'

Matheus.


Sorry, I forgot this:

Beside the error above, which for now I'm using:

immutable float x0f = cast(int)x; //x.floor;
immutable float y0f = cast(int)y; //y.floor;

Just to compile, your version here is twice faster than the 
original one.


Matheus.


Re: ANTLR grammar for D?

2014-06-20 Thread Wesley Hamilton via Digitalmars-d

On Friday, 20 June 2014 at 18:45:15 UTC, Wesley Hamilton wrote:

On Friday, 20 June 2014 at 18:20:36 UTC, Brian Schott wrote:

On Friday, 20 June 2014 at 09:22:07 UTC, Wesley Hamilton wrote:
Thanks. Just realized that the add grammar button for ANTLR 
grammar list is broken... so that could be why it's not 
there. I'll probably still finish the grammar I'm making 
since I'm 75% done. That's a great reference, though. I think 
it's missing a few minor details like delimited strings, 
token strings, and assembly keywords.


Keep in mind that assembly keywords aren't keywords outside of 
ASM blocks. You need your lexer to identify them as 
identifiers.


It should help where the Language Reference pages aren't 
accurate. For example, I think HexLetter is incorrectly 
defined.


If you find problems in the grammar please file an issue on 
Github or create a pull request.


If you need the AST of some D code, you'll save a lot of time 
by downloading D-Scanner and running dscanner --ast 
sourcecode.d > sourcecode_ast.xml


My intent is to develop a language based on D and a compiler to 
go with it. I've done something similar using ANTLR once 
before. I might turn it into a BS project.


I realize assembly instruction keywords aren't actually tokens 
for the lexer. Having a clean ANTLR file that doesn't include 
predicates and language-dependent code is nice as a starting 
point, but the parser eventually needs to check the validity of 
asm statements. DelimitedStrings with Identifier delimiters need 
predicates too. Also, TokenStrings can't be a simple parse rule, 
since the dot operator only applies to characters and not tokens.


DIP64: Attribute Cleanup

2014-06-20 Thread Brian Schott via Digitalmars-d

http://wiki.dlang.org/DIP64

Attributes in D have two problems:
1. There are too many of them and declarations are getting too 
verbose

2. New attributes use @ and the old ones do not.

I've created a DIP to address these issues.
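
Roughly, the idea is that a declaration like

void foo() pure nothrow @safe @nogc;

would become

void foo() @pure @nothrow @safe @nogc;

and that commonly combined attributes could be grouped into a named set:

@spiffy = @pure @nothrow @safe @nogc;
void foo() @spiffy;

(This is only a sketch; see the DIP for the exact syntax and rules.)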


Breaking code faster and better

2014-06-20 Thread w0rp via Digitalmars-d
I've been catching up on the Lang.NEXT videos and I just watched 
the one about Hack and converting lots of PHP code to Hack at 
Facebook.


https://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/Hack

The essence of the talk is essentially gradually introducing 
strong constraints on a codebase to improve quality. I think the 
most interesting part for us is that about thirty minutes in the 
question is asked, can this be done generally in the industry, 
automatically? He is again talking about something which I think 
is a lot like 'gofix' for Go.


Now Rob Pike pin pointed a point for Go's growth coming from 
fixing on a version 1.0 for the language, so everyone could know 
what was going to happen. So there's evidence of guarantees of 
stability leading to increased adoption, although correlation 
doesn't imply causation. but it does make me wonder. Should we be 
developing ways of saying, We're just going to change this thing 
which isn't so great, here's a warning, run the tool to fix it 
for you, or similar?


I know one of my pet peeves is null pointers and null class 
references, as an example. Either that or an Option/Maybe monad 
are nice to have for when you want to implement linked lists and 
so on, but I think actually the number of times you want to allow 
for something which is either something or nothing are quite 
small. So it would be nice to have tools so we could one day say, 
Nah, we're going to make it harder to create null pointer 
errors, and here's a tool to assist you with that transition.


I just tossed in one hot topic with another one, so I'll duck for 
now to avoid a beheading.


Re: Breaking code faster and better

2014-06-20 Thread w0rp via Digitalmars-d
I don't typically double post, but I just noticed this in another 
thread posted very recently.


http://wiki.dlang.org/DIP64

To aid in this transition a tool will be constructed on top of 
the lexer contained in the D-Scanner project. [ ... ]


So, like this I suppose.


Re: Set-up timeouts on thread-related unittests

2014-06-20 Thread Steven Schveighoffer via Digitalmars-d
On Fri, 20 Jun 2014 14:30:50 -0400, Sean Kelly s...@invisibleduck.org  
wrote:



On Friday, 20 June 2014 at 18:24:21 UTC, Steven Schveighoffer
wrote:


No, I mean that druntime would run all unit tests with an expectation  
that each unit test should time out after N seconds.


I'd be more inclined to have the test runner kill the process if
it takes more than N seconds to complete and call that a test
failure.  Figuring out something reasonable to do in the unit
tester within Druntime would be difficult.


Timing individual tests is more likely to be accurate than timing the  
whole set of unit tests. A slow machine could easily double or triple the  
time the whole thing takes, and it would be difficult to pinpoint a  
reasonable time that all machines would accept. But a single unit test  
block should be really quick, I think 1 second is long enough to say it's  
failed in the vast majority of cases. Of course, if they all take near 1  
second, the total time will be huge. But we are not testing speed, we are  
testing for infinite loops/hangs. The trick is specifying any info to the  
runtime about specific unit tests, we don't have any mechanism to do that.  
UDAs would be perfect.


I don't think it would be that difficult. You just need a separate thread (can be written in D but only using the C runtime) that exits the process if it doesn't get pinged properly. Then the test runner has to ping the thread between each test.
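
A rough sketch of what I mean (hypothetical code, POSIX-only, and using druntime atomics rather than strictly the C runtime):

import core.atomic : atomicLoad, atomicOp;
import core.sys.posix.unistd : usleep, _exit;

shared uint pingCount;

// the test runner calls this between unit tests
void ping() { atomicOp!"+="(pingCount, 1); }

// body of the watchdog thread
void watchdog()
{
    uint last = atomicLoad(pingCount);
    while (true)
    {
        usleep(1_000_000);    // check roughly once per second
        immutable now = atomicLoad(pingCount);
        if (now == last)
            _exit(1);         // no ping since the last check: assume a hung test
        last = now;
    }
}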


-Steve


Re: DIP64: Attribute Cleanup

2014-06-20 Thread H. S. Teoh via Digitalmars-d
On Fri, Jun 20, 2014 at 07:22:02PM +, Brian Schott via Digitalmars-d wrote:
 http://wiki.dlang.org/DIP64
 
 Attributes in D have two problems:
 1. There are too many of them and declarations are getting too verbose
 2. New attributes use @ and the old ones do not.
 
 I've created a DIP to address these issues.

And while we're at it, why not also fix holes in attribute semantics on
top of just fixing syntax?

First, there is no way to mark a function as *impure* as opposed to pure
(leaving out pure is not an option in template functions due to
automatic attribute inference). Also, there's an inconsistency between
positive attributes (pure, safe) vs. negative attributes (nothrow,
nogc). So ideally, the new syntax should allow you to specify both pure
and impure, and ideally, it should not special-case on peculiarities of
the English language (pure/impure vs. throw/nothrow). So it should be
something like @pure, @!pure, @throw, @!throw, @gc, @!gc, etc., for
maximum consistency.
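
For instance (hypothetical syntax), that would give template authors a way to
opt out of inference explicitly:

// hypothetical: force this template function to be treated as impure,
// instead of letting the compiler infer purity from the body and from fn
auto callThrough(alias fn)(int x) @!pure
{
    return fn(x);
}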

I also like your attribute sets idea. This could be the solution we're
looking for with transitive attributes (aka inout(pure), inout(nothrow),
etc.). If there was some syntax for attribute set intersection, say
@a*@b, then we could specify that the attribute set of some given
function f() is the intersection of the attribute sets of its input
delegates. For example:

// This is hypothetical syntax, I'm sure you can think of a
// better way to write this.
int dgCaller(int delegate(int) @a dg1, int delegate(int) @b dg2)
    @this = @a*@b // specifies that this function's
                  // attributes is the intersection of @a and @b
{
    if (someCondition)
        return dg1(1);
    else
        return dg2(2);
}


T

-- 
Heads I win, tails you lose.


Re: DIP64: Attribute Cleanup

2014-06-20 Thread Meta via Digitalmars-d

On Friday, 20 June 2014 at 19:22:04 UTC, Brian Schott wrote:

http://wiki.dlang.org/DIP64

Attributes in D have two problems:
1. There are too many of them and declarations are getting too 
verbose

2. New attributes use @ and the old ones do not.

I've created a DIP to address these issues.


Does this work for all attributes? For example:

@OneTo5 = @(1) @(2) @(3) @(4) @(5);


And will this be possible?

struct Test
{
string str;
}

@Tattr(str) = @Test(str);
@Tattr = @Test();


Re: DIP64: Attribute Cleanup

2014-06-20 Thread Brad Anderson via Digitalmars-d

On Friday, 20 June 2014 at 19:22:04 UTC, Brian Schott wrote:

http://wiki.dlang.org/DIP64

Attributes in D have two problems:
1. There are too many of them and declarations are getting too 
verbose

2. New attributes use @ and the old ones do not.

I've created a DIP to address these issues.


I like it.

Just thinking aloud, it could be interesting to allow compile 
time logic of some sort (both on the arguments and on the symbol 
the attribute is being attached to).


Contrived example borrowing template syntax (which could almost 
certainly be improved upon):


template @pureIfNameHasPure(Sym) {
    static if (__traits(identifier, Sym).canFind("Pure"))
        alias @pureIfNameHasPure = @pure;
    else
        alias @pureIfNameHasPure = /* nothing... not sure how to show that */;
}


Re: DIP64: Attribute Cleanup

2014-06-20 Thread Gary Willoughby via Digitalmars-d

On Friday, 20 June 2014 at 19:22:04 UTC, Brian Schott wrote:

http://wiki.dlang.org/DIP64

Attributes in D have two problems:
1. There are too many of them and declarations are getting too 
verbose

2. New attributes use @ and the old ones do not.

I've created a DIP to address these issues.


They do need standardising, but I don't like the idea of attribute 
sets. Attribute sets would make attributes too overcomplicated 
to understand. Attributes need to be simple and concise, which I 
think they already are.


Re: Set-up timeouts on thread-related unittests

2014-06-20 Thread David Nadlinger via Digitalmars-d
On Friday, 20 June 2014 at 19:44:18 UTC, Steven Schveighoffer 
wrote:
Timing individual tests is more likely to be accurate than 
timing the whole set of unit tests. A slow machine could easily 
double or triple the time the whole thing takes, and it would 
be difficult to pinpoint a reasonable time that all machines 
would accept.


That's true if you expect the timeout to be hit as part of 
regular testing. If it's only to keep the auto tester from 
hanging, just setting a one-minute global timeout per test case 
(or something like that) should be fine. Sure, the auto-tester 
throughput would suffer somewhat as long as the build is broken, 
but…


David


Re: Redesign of dlang.org

2014-06-20 Thread w0rp via Digitalmars-d
I thought I'd post an update on this work so I don't leave any 
people hanging who are wondering about this.


I haven't had the time in the last week or so to work on this as 
I have been very busy recently with other responsibilities. With 
my day job in particular, I will be in Berlin most of next week 
hosting a conference with my company. Because it's a small world, 
by pure coincidence Sociomantic are actually sponsoring this 
conference. (I doubt the marketing guys talk to the D programmers 
that much, though.)


http://www.performancemarketinginsights.com/14/europe/sponsor/sociomantic/

However, after I get back I will definitely work on this redesign 
some more. Unless somebody pins me down, I'll keep going with it. 
I only just realised that I made the terrible mistake of not 
pushing my most recent changes, so you can find them now on 
GitHub.


I started working on discovering Markdown files at runtime for 
filling in sections of pages, like the center part of changelog 
pages, and generating the table of contents from the HTML 
automatically. Because the prospect of parsing HTML myself scares 
me, I just copied in Adam D Ruppe's DOM library for that. I hope 
that I have properly attributed the authors for that code, please 
advise me if I should add any additional attribution in there 
otherwise.


The early result of that I think has been pretty good. I can edit 
a file and see a change pretty quickly. I lost the functions I 
was using to generate the Bugzilla links, but I'm going to 
replace them with links to /bug/id instead, and making that 
page redirect to the right place, just so I can make my Markdown 
files a little smaller. The amount of memory consumed while 
compiling is better, going from something like 2GB to 1.2GB, 
because there aren't tons of Diet templates to build in anymore. 
The Markdown parsing is just the library functionality that comes 
with vibe.d.


I totally have not converted all of the pages from Diet to 
Markdown yet. In fact, if you run the site as is now, you can't 
see any of the changelog pages except 2.000 anymore. Plus, you 
could perhaps do something involving caching files instead of 
loading them from the drive all the time, but I just want 
something that works for now.


So there's my update for now.


Re: Compiler generated assertion error message

2014-06-20 Thread Jacob Carlborg via Digitalmars-d

On 2014-06-19 19:52, Dicebot wrote:

On a related topic:

Feature like this is extremely convenient for unit tests. However using
assertions in unit test blocks does not fit well with any custom test
runner that does not immediately terminate the application (because
AssertionError is an Error).


There's an assert handler in druntime [1], but that expects the 
implementation to be nothrow, so you cannot throw an exception.



I'd personally love some way to get such formatted expression for any
library function.

What is the official stance on this?


[1] 
https://github.com/D-Programming-Language/druntime/blob/master/src/core/exception.d#L374
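
For reference, wiring one up looks roughly like this (a sketch; the handler 
must stay nothrow, which is exactly the limitation above, and the exact 
druntime API may vary slightly between versions):

import core.exception : assertHandler;

// must be nothrow, so it cannot rethrow the failure as an Exception;
// printing and/or aborting is about all it can do
void myHandler(string file, size_t line, string msg) nothrow
{
    import core.stdc.stdio : fprintf, stderr;
    fprintf(stderr, "assertion failed: %.*s(%u): %.*s\n",
            cast(int) file.length, file.ptr, cast(uint) line,
            cast(int) msg.length, msg.ptr);
}

shared static this()
{
    assertHandler = &myHandler;
}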


--
/Jacob Carlborg


Re: DIP64: Attribute Cleanup

2014-06-20 Thread w0rp via Digitalmars-d

On Friday, 20 June 2014 at 19:22:04 UTC, Brian Schott wrote:

http://wiki.dlang.org/DIP64

Attributes in D have two problems:
1. There are too many of them and declarations are getting too 
verbose

2. New attributes use @ and the old ones do not.

I've created a DIP to address these issues.


It may be worth splitting things up a little, or perhaps the 
extra parts in the DIP are your 'duck.' Because I think 
normalising every attribute to @ syntax is good. I look at it and 
think, Yeah, good. Especially so if it also means that user 
defined attributes can also be on both sides of a function 
signature, as that would ease transition between different 
versions of the language.


I think the parts in the DIP about the exact semantics or syntax 
for composing attributes will be debated a bit, but the Let's 
just put @ in there part is pretty straightforward.


Re: Compiler generated assertion error message

2014-06-20 Thread Dicebot via Digitalmars-d

On Friday, 20 June 2014 at 20:04:32 UTC, Jacob Carlborg wrote:

On 2014-06-19 19:52, Dicebot wrote:

On a related topic:

Feature like this is extremely convenient for unit tests. However using assertions in unit test blocks does not fit well with any custom test runner that does not immediately terminate the application (because AssertionError is an Error).


There's an assert handler in druntime [1], but that expects the 
implementation to be nothrow, so you cannot throw an exception.


Yes, I have already found it. There is also 
https://github.com/D-Programming-Language/druntime/blob/master/src/core/exception.d#L447 
but I don't see any way to replace it with a user handler.


Anyway, some sort of library solution (probably via __traits) is 
much more desirable, because it will also be applicable to things 
like std.exception.enforce and the like.


Re: DIP64: Attribute Cleanup

2014-06-20 Thread Brian Schott via Digitalmars-d
On Friday, 20 June 2014 at 19:48:49 UTC, H. S. Teoh via 
Digitalmars-d wrote:
First, there is no way to mark a function as *impure* as 
opposed to pure
(leaving out pure is not an option in template functions due 
to
automatic attribute inference). Also, there's an inconsistency 
between
positive attributes (pure, safe) vs. negative attributes 
(nothrow,
nogc). So ideally, the new syntax should allow you to specify 
both pure
and impure, and ideally, it should not special-case on 
peculiarities of
the English language (pure/impure vs. throw/nothrow). So it 
should be
something like @pure, @!pure, @throw, @!throw, @gc, @!gc, etc., 
for

maximum consistency.


I can see this being useful. We'd just have to decide what it 
means to negate an attribute with arguments. (e.g. 
`@!name(bob)`)


Also in the case of @!throw we'd have to modify the definition of 
attributes to accept the throw token instead of just 
identifiers. Maybe converting nothrow to @!throws would be 
better.


I also like your attribute sets idea. This could be the 
solution we're
looking for with transitive attributes (aka inout(pure), 
inout(nothrow),
etc.). If there was some syntax for attribute set intersection, 
say
@a*@b, then we could specify that the attribute set of some 
given
function f() is the intersection of the attribute sets of its 
input

delegates. For example:

// This is hypothetical syntax, I'm sure you can think of a
// better way to write this.
int dgCaller(int delegate(int) @a dg1, int delegate(int) @b dg2)
    @this = @a*@b // specifies that this function's
                  // attributes is the intersection of @a and @b
{
    if (someCondition)
        return dg1(1);
    else
        return dg2(2);
}


T


Is that use case common enough to justify complicating the 
compiler?




Re: Perlin noise benchmark speed

2014-06-20 Thread bearophile via Digitalmars-d

Mattcoder:


Just one note, with the last version of DMD:


Yes, I know, at the top of the file I have specified it's for 
ldc2.


Bye,
bearophile


Re: Set-up timeouts on thread-related unittests

2014-06-20 Thread Iain Buclaw via Digitalmars-d
On 20 June 2014 19:08, Sean Kelly via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Friday, 20 June 2014 at 07:13:24 UTC, Iain Buclaw wrote:

 Hi,

 I've been seeing a problem on the Debian X32 build system where the unittest
 process just hangs, and requires manual intervention by the poor maintainer
 to kill the process manually before the build fails due to inactivity.

 Haven't yet managed to reduce the problem (it only happens on a native X32
 system, but not when running X32 under native x86_64), but thought it would
 be a good idea to suggest that any thread related tests should be safely
 handled by self terminating after a period of waiting.

 Thoughts from the phobos maintainers?


 I'm surprised that there are thread-related tests that deadlock.
 All the ones I wrote time out for exactly this reason.  Of
 course, getting the timings right can be a pain, so there's no
 perfect solution.


From my experience deadlocks in the unittest program have been because
of either problems with core.thread or std.parallelism tests.  I am
yet to narrow it down though, so it's just a stab in the dark as to
what the problem may be.


Re: Perlin noise benchmark speed

2014-06-20 Thread bearophile via Digitalmars-d

Mattcoder:


Beside the error above, which for now I'm using:

immutable float x0f = cast(int)x; //x.floor;
immutable float y0f = cast(int)y; //y.floor;

Just to compile,


If you remove the calls to floor, you are avoiding the main 
problem to fix.


Bye,
bearophile


Re: Perlin noise benchmark speed

2014-06-20 Thread bearophile via Digitalmars-d

dennis luehring:

it does not make sense to optimize this example more and 
more - it should be fast with the original version


But the original code is not fast. So someone has to find what's 
broken. I have shown part of the broken parts to fix (floor on 
ldc2).


Also, the original code is not written in a fully idiomatic way, 
also because unfortunately today the lazy way to write D code 
is not always the best/right way (example: you have to add a ton of 
immutable/const and other annotations, because immutability is not the 
default), so a code fix is good.


Bye,
bearophile


Re: Adding the ?. null verification

2014-06-20 Thread Jacob Carlborg via Digitalmars-d

On 2014-06-19 16:37, Craig Dillabaugh wrote:


Is this any better than?

if(!a) a = b;


I would say it's about the same as how a ?? b is better than a ? a : b. 
It gets better since you can use it directly in a return statement:


Object a ()
{
    return a ??= new Object;
}
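
For comparison, the closest equivalent in today's D, assuming a hypothetical 
backing field _a:

private Object _a;

Object a()
{
    if (_a is null)
        _a = new Object;
    return _a;
}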

--
/Jacob Carlborg


Re: Tail pad optimization, cache friendlyness and C++ interrop

2014-06-20 Thread Dicebot via Digitalmars-d
On Thursday, 19 June 2014 at 11:12:46 UTC, Artur Skawina via 
Digitalmars-d wrote:
Wait what? Do you know a single person who decided to not work 
on DMD FE because of kind of formally (but not practically) 
non-free backend?


Well, do you think I would have said what I did if this issue didn't affect /me/? [1]

...

And, yes, some people really always check licenses, even before 
fully
determining what a software project actually is/does. Because 
if the
license is problematic then everything else is irrelevant -- 
the project
simply is unusable, and any time spent looking at it would be 
wasted.


That is fortunately not a problem for dmdfe, as boost/gpl 
should be
ok for (almost) everyone. But the cost of having to deal with 
another
license, for a bundled part, that you're never going to use and 
are not
even interested in, is there. The cost of scratching-an-itch also 
becomes higher. Depending on person/context, these costs can be
prohibitive.

artur


I still don't understand. What impact does the backend license have 
on you? In other words, what is the potential danger you need to be 
concerned about that makes potential contributions too risky? One 
problem I am aware of is the redistribution issue, which is a common 
blocker for getting into Linux distributions. But personal 
contributions? Can you explain it in a bit more detail?


Re: DIP64: Attribute Cleanup

2014-06-20 Thread Steven Schveighoffer via Digitalmars-d
On Fri, 20 Jun 2014 15:47:07 -0400, H. S. Teoh via Digitalmars-d  
digitalmars-d@puremagic.com wrote:


On Fri, Jun 20, 2014 at 07:22:02PM +, Brian Schott via Digitalmars-d  
wrote:

http://wiki.dlang.org/DIP64

Attributes in D have two problems:
1. There are too many of them and declarations are getting too verbose
2. New attributes use @ and the old ones do not.

I've created a DIP to address these issues.


And while we're at it, why not also fix holes in attribute semantics on
top of just fixing syntax?

First, there is no way to mark a function as *impure* as opposed to pure
(leaving out pure is not an option in template functions due to
automatic attribute inference). Also, there's an inconsistency between
positive attributes (pure, safe) vs. negative attributes (nothrow,
nogc). So ideally, the new syntax should allow you to specify both pure
and impure, and ideally, it should not special-case on peculiarities of
the English language (pure/impure vs. throw/nothrow). So it should be
something like @pure, @!pure, @throw, @!throw, @gc, @!gc, etc., for
maximum consistency.


I like the idea, but seeing as how attribute sets already take arguments,  
it's natural to add them to builtins:


@pure(true) == @pure
@pure(false) == not @pure

-Steve


Re: DIP64: Attribute Cleanup

2014-06-20 Thread Steven Schveighoffer via Digitalmars-d
On Fri, 20 Jun 2014 15:22:02 -0400, Brian Schott briancsch...@gmail.com  
wrote:



http://wiki.dlang.org/DIP64

Attributes in D have two problems:
1. There are too many of them and declarations are getting too verbose
2. New attributes use @ and the old ones do not.

I've created a DIP to address these issues.


I like it.

At first, I thought "hm..., every project is going to have their own  
definition for @safe @pure @nothrow", but we can put common sets in  
druntime that everyone should use, and we already allow custom  
attributes anyway that have to be looked up.


One thing this will make slightly more difficult is looking for e.g.  
@trusted functions, because you can't just grep for them. However, I think  
with DScanner, you can probably find things easily enough.


2 thoughts:

1. On H.S.Teoh's idea to add negation, what does foo() @pure !@pure mean  
(or if my preferred syntax was accepted, @pure @pure(false) )?

2. What does this print?

@myattr = @safe @pure;

void foo() @myattr {}

pragma(msg, typeof(foo));

-Steve


Re: Adding the ?. null verification

2014-06-20 Thread Jacob Carlborg via Digitalmars-d

On 2014-06-18 21:36, H. S. Teoh via Digitalmars-d wrote:


Here's a first stab at a library solution:


I thought of adding a field to indicate if a value is present or not. If 
the value is accessed when it's not present, it would assert/throw.


--
/Jacob Carlborg



Re: DIP64: Attribute Cleanup

2014-06-20 Thread Timon Gehr via Digitalmars-d

On 06/20/2014 09:22 PM, Brian Schott wrote:

http://wiki.dlang.org/DIP64

Attributes in D have two problems:
1. There are too many of them and declarations are getting too verbose
2. New attributes use @ and the old ones do not.

I've created a DIP to address these issues.


Why not make the built-in attributes proper symbols instead and use

alias Seq(T...)=T;
alias spiffy = Seq!(pure,nothrow,safe);

float mul(float a, float b) @spiffy{ }

?

This will also allow use cases such as passing attributes by alias.


Re: Perlin noise benchmark speed

2014-06-20 Thread bearophile via Digitalmars-d

Nick Treleaven:


A Perlin noise benchmark was quoted in this reddit thread:


And a simple benchmark for D ranges/parallelism:

http://www.reddit.com/r/programming/comments/28mub4/clash_of_the_lambdas_comparing_lambda_performance/

Bye,
bearophile

