Re: Goldie Parsing System v0.4 Released - Now for D2

2011-04-16 Thread Nick Sabalausky
Nick Sabalausky a@a.a wrote in message 
news:ioanmi$82c$1...@digitalmars.com...
 Andrej Mitrovic Wrote:

 What I meant was that code like this will throw if MyType isn't
 defined anywhere:

 int main(int x)
 {
 MyType var;
 }

 goldie.exception.UnexpectedTokenException@src\goldie\exception.d(35):
 test.c(3:12): Unexpected Id: 'var'

 It looks like valid C /syntax/, except that MyType isn't defined. But
 this will work:
 struct MyType {
int field;
 };
 int main(int x)
 {
 struct MyType var;
 }

 So either Goldie or ParseAnything needs to have all types defined.
 Maybe this is obvious, but I wouldn't know since I've never used a
 parser before. :p

 Oddly enough, this one will throw:
 typedef struct {
 int field;
 } MyType;
 int main(int x)
 {
 MyType var;
 }

 goldie.exception.UnexpectedTokenException@src\goldie\exception.d(35):
 test.c(7:12): Unexpected Id: 'var'

 This one will throw as well:
 struct SomeStruct {
 int field;
 };
 typedef struct SomeStruct MyType;
 int main(int x)
 {
 MyType var;
 }

 goldie.exception.UnexpectedTokenException@src\goldie\exception.d(35):
 test.c(13:12): Unexpected Id: 'myvar'

 Isn't typedef a part of ANSI C?

 I'm not at my computer right now, so I can't check, but it sounds like the 
 grammar follows the really old C style of requiring structs to be declared 
 with "struct StructName varName". Apparently it doesn't take into account 
 the possibility of typedefs being used to eliminate that. When I get home, 
 I'll check; I think it may be an easy change to the grammar.


Yea, turns out that grammar just doesn't support using user-defined types 
without preceding them with "struct", "union", or "enum". You can see that 
here:

<Var Decl> ::= <Mod> <Type> <Var> <Var List> ';'
             |       <Type> <Var> <Var List> ';'
             | <Mod>        <Var> <Var List> ';'

<Mod> ::= extern
        | static
        | register
        | auto
        | volatile
        | const

<Type> ::= <Base> <Pointers>

<Base> ::= <Sign> <Scalar>   ! Ie, the built-ins like char, signed int, etc...
         | struct Id
         | struct '{' <Struct Def> '}'
         | union Id
         | union '{' <Struct Def> '}'
         | enum Id
So when you use "MyType" instead of "struct MyType": it sees "MyType", 
assumes it's a variable since it doesn't match any of the <Type> forms 
above, and then barfs on "var" because "variable1 variable2" isn't valid C 
code. Normally, you'd just add another form to <Base> (ie, add a line after 
"| enum Id" that says "| Id"). Except, the problem is...

C is notorious for types and variables being ambiguous with each other. So 
the distinction pretty much has to be done in the semantic phase (ie, 
outside of the formal grammar). But this grammar seems to be trying to make 
that distinction anyway. So trying to fix it by simply adding a <Base> 
::= Id leads to ambiguity problems with types versus variables/expressions. 
That's probably why they didn't enhance the grammar that far - their 
separation-of-type-and-variable approach doesn't really work for C.
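
(For the record, the usual workaround is what's sometimes called "the lexer 
hack": feed typedef names from the parser back into the tokenizer, so the 
grammar itself never has to make the distinction. A minimal sketch in D - 
names hypothetical, this isn't Goldie API:

enum TokKind { Id, TypeId }

bool[string] typedefNames;   // filled in whenever the parser reduces a typedef

TokKind classify(string lexeme)
{
    // An identifier that names a known typedef is handed to the parser as
    // a distinct TypeId token, so "Id vs. type" never reaches the grammar.
    return (lexeme in typedefNames) ? TokKind.TypeId : TokKind.Id;
}

That keeps the grammar LALR(1)-friendly at the cost of a semantic feedback 
loop.)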

I'll have to think a bit on how best to adjust it. You can also check the 
GOLD mailing lists here to see if anyone has another C grammar:

http://www.devincook.com/goldparser/contact.htm





Re: Goldie Parsing System v0.4 Released - Now for D2

2011-04-16 Thread Kagamin
Nick Sabalausky Wrote:

 Yea, turns out that grammar just doesn't support using user-defined types 
 without preceding them with "struct", "union", or "enum". You can see that 
 here:
 
 <Var Decl> ::= <Mod> <Type> <Var> <Var List> ';'
              |       <Type> <Var> <Var List> ';'
              | <Mod>        <Var> <Var List> ';'
 
 <Mod> ::= extern
         | static
         | register
         | auto
         | volatile
         | const
 
 <Type> ::= <Base> <Pointers>
 
 <Base> ::= <Sign> <Scalar>   ! Ie, the built-ins like char, signed int, etc...
          | struct Id
          | struct '{' <Struct Def> '}'
          | union Id
          | union '{' <Struct Def> '}'
          | enum Id
 
 So when you use "MyType" instead of "struct MyType": it sees "MyType", 
 assumes it's a variable since it doesn't match any of the <Type> forms 
 above, and then barfs on "var" because "variable1 variable2" isn't valid C 
 code. Normally, you'd just add another form to <Base> (ie, add a line after 
 "| enum Id" that says "| Id"). Except, the problem is...
 
 C is notorious for types and variables being ambiguous with each other.

As I understand, Type is a type, Var is a variable. There should be no 
problem here.


Re: Goldie Parsing System v0.4 Released - Now for D2

2011-04-16 Thread Nick Sabalausky
Kagamin s...@here.lot wrote in message 
news:iod552$rbe$1...@digitalmars.com...
 Nick Sabalausky Wrote:

 Yea, turns out that grammar just doesn't support using user-defined types
 without preceding them with "struct", "union", or "enum". You can see that
 here:

 <Var Decl> ::= <Mod> <Type> <Var> <Var List> ';'
              |       <Type> <Var> <Var List> ';'
              | <Mod>        <Var> <Var List> ';'
 
 <Mod> ::= extern
         | static
         | register
         | auto
         | volatile
         | const
 
 <Type> ::= <Base> <Pointers>
 
 <Base> ::= <Sign> <Scalar>   ! Ie, the built-ins like char, signed int, etc...
          | struct Id
          | struct '{' <Struct Def> '}'
          | union Id
          | union '{' <Struct Def> '}'
          | enum Id

 So when you use "MyType" instead of "struct MyType": it sees "MyType",
 assumes it's a variable since it doesn't match any of the <Type> forms
 above, and then barfs on "var" because "variable1 variable2" isn't valid C
 code. Normally, you'd just add another form to <Base> (ie, add a line after
 "| enum Id" that says "| Id"). Except, the problem is...

 C is notorious for types and variables being ambiguous with each other.

 As I understand, Type is a type, Var is a variable. There should be no 
 problem here.

First of all, the name <Var> up there is misleading. That only refers to 
the name of the variable in the variable's declaration. When actually 
*using* a variable, that's a <Value>, which is defined like this:

<Value> ::= OctLiteral
          | HexLiteral
          | DecLiteral
          | StringLiteral
          | CharLiteral
          | FloatLiteral
          | Id '(' <Expr> ')'   ! Function call
          | Id '(' ')'          ! Function call
          | Id                  ! Use a variable
          | '(' <Expr> ')'

So we have a situation like this:

<Type> ::= <Base>
<Base> ::= Id
<Value> ::= Id

So when the parser encounters an Id, how does it know whether to reduce it 
to a <Base> or a <Value>? Since they can both appear in the same place (ex: 
immediately after a left curly-brace, such as at the start of a function 
body), there's no way to tell.

Worse, suppose it comes across this:

x*y

If x is a variable, then that's a multiplication. If x is a type then it's a 
pointer declaration. Is it supposed to be multiplication or a declaration? 
Could be either. They're both permitted in the same place.





Re: Goldie Parsing System v0.4 Released - Now for D2

2011-04-16 Thread Nick Sabalausky
Nick Sabalausky a@a.a wrote in message 
news:iod6fn$tch$1...@digitalmars.com...
 Kagamin s...@here.lot wrote in message 
 news:iod552$rbe$1...@digitalmars.com...

 As I understand, Type is a type, Var is a variable. There should be 
 no problem here.

 First of all, the name <Var> up there is misleading. That only refers to 
 the name of the variable in the variable's declaration. When actually 
 *using* a variable, that's a <Value>, which is defined like this:

 <Value> ::= OctLiteral
           | HexLiteral
           | DecLiteral
           | StringLiteral
           | CharLiteral
           | FloatLiteral
           | Id '(' <Expr> ')'   ! Function call
           | Id '(' ')'          ! Function call
           | Id                  ! Use a variable
           | '(' <Expr> ')'

 So we have a situation like this:

 <Type> ::= <Base>
 <Base> ::= Id
 <Value> ::= Id

 So when the parser encounters an Id, how does it know whether to reduce it 
 to a <Base> or a <Value>? Since they can both appear in the same place 
 (ex: immediately after a left curly-brace, such as at the start of a 
 function body), there's no way to tell.

 Worse, suppose it comes across this:

 x*y

 If x is a variable, then that's a multiplication. If x is a type then it's 
 a pointer declaration. Is it supposed to be multiplication or a 
 declaration? Could be either. They're both permitted in the same place.


In other words, we basically have a form of this:

A ::= B | C
B ::= X
C ::= X

Can't be done. No way to tell if X is B or C.




Re: Goldie Parsing System v0.4 Released - Now for D2

2011-04-16 Thread Nick Sabalausky
Nick Sabalausky a@a.a wrote in message 
news:iobh9o$1d04$1...@digitalmars.com...
 Nick Sabalausky a@a.a wrote in message 
 news:ioanmi$82c$1...@digitalmars.com...
 Andrej Mitrovic Wrote:

 What I meant was that code like this will throw if MyType isn't
 defined anywhere:

 int main(int x)
 {
 MyType var;
 }

 goldie.exception.UnexpectedTokenException@src\goldie\exception.d(35):
 test.c(3:12): Unexpected Id: 'var'

 It looks like valid C /syntax/, except that MyType isn't defined. But
 this will work:
 struct MyType {
int field;
 };
 int main(int x)
 {
 struct MyType var;
 }

 So either Goldie or ParseAnything needs to have all types defined.
 Maybe this is obvious, but I wouldn't know since I've never used a
 parser before. :p

 Oddly enough, this one will throw:
 typedef struct {
 int field;
 } MyType;
 int main(int x)
 {
 MyType var;
 }

 goldie.exception.UnexpectedTokenException@src\goldie\exception.d(35):
 test.c(7:12): Unexpected Id: 'var'

 This one will throw as well:
 struct SomeStruct {
 int field;
 };
 typedef struct SomeStruct MyType;
 int main(int x)
 {
 MyType var;
 }

 goldie.exception.UnexpectedTokenException@src\goldie\exception.d(35):
 test.c(13:12): Unexpected Id: 'myvar'

 Isn't typedef a part of ANSI C?

 I'm not at my computer right now, so I can't check, but it sounds like 
 the grammar follows the really old C style of requiring structs to be 
 declared with "struct StructName varName". Apparently it doesn't take 
 into account the possibility of typedefs being used to eliminate that. 
 When I get home, I'll check; I think it may be an easy change to the 
 grammar.


 Yea, turns out that grammar just doesn't support using user-defined types 
 without preceding them with "struct", "union", or "enum". You can see that 
 here:

 <Var Decl> ::= <Mod> <Type> <Var> <Var List> ';'
              |       <Type> <Var> <Var List> ';'
              | <Mod>        <Var> <Var List> ';'
 
 <Mod> ::= extern
         | static
         | register
         | auto
         | volatile
         | const
 
 <Type> ::= <Base> <Pointers>
 
 <Base> ::= <Sign> <Scalar>   ! Ie, the built-ins like char, signed int, etc...
          | struct Id
          | struct '{' <Struct Def> '}'
          | union Id
          | union '{' <Struct Def> '}'
          | enum Id

 So when you use "MyType" instead of "struct MyType": it sees "MyType", 
 assumes it's a variable since it doesn't match any of the <Type> forms 
 above, and then barfs on "var" because "variable1 variable2" isn't valid C 
 code. Normally, you'd just add another form to <Base> (ie, add a line 
 after "| enum Id" that says "| Id"). Except, the problem is...

 C is notorious for types and variables being ambiguous with each other. So 
 the distinction pretty much has to be done in the semantic phase (ie, 
 outside of the formal grammar). But this grammar seems to be trying to 
 make that distinction anyway. So trying to fix it by simply adding a 
 <Base> ::= Id leads to ambiguity problems with types versus 
 variables/expressions. That's probably why they didn't enhance the grammar 
 that far - their separation-of-type-and-variable approach doesn't really 
 work for C.

 I'll have to think a bit on how best to adjust it. You can also check the 
 GOLD mailing lists here to see if anyone has another C grammar:

 http://www.devincook.com/goldparser/contact.htm


Unfortunately, I think this may require LALR(k). Goldie and GOLD are only 
LALR(1) right now.

I had been under the impression that LALR(1) was sufficient because, 
according to the oh-so-useful-in-the-real-world formal literature, any LR(k) 
can *technically* be converted into a *cough* equivalent LR(1). But not 
only is the algorithm to do this hidden behind the academic ivory wall, but 
word on the street is that the resulting grammar is gigantic and bears little 
or no resemblance to the original structure (and is therefore essentially 
useless in the real world).

Seems I'm gonna have to add some backtracking or stack-cloning to Goldie, 
probably along with some sort of cycle-detection. (I think I'm starting to 
understand why Walter said he doesn't like to bother with parser generators, 
unngh...)




Re: Floating Point + Threads?

2011-04-16 Thread Robert Jacques

On Fri, 15 Apr 2011 23:22:04 -0400, dsimcha dsim...@yahoo.com wrote:

I'm trying to debug an extremely strange bug whose symptoms appear in a  
std.parallelism example, though I'm not at all sure the root cause is in  
std.parallelism.  The bug report is at  
https://github.com/dsimcha/std.parallelism/issues/1#issuecomment-1011717  
.


Basically, the example in question sums up all the elements of a lazy  
range (actually, std.algorithm.map) in parallel.  It uses  
taskPool.reduce, which divides the summation into work units to be  
executed in parallel.  When executed in parallel, the results of the  
summation are non-deterministic after about the 12th decimal place, even  
though all of the following properties are true:


1.  The work is divided into work units in a deterministic fashion.

2.  Within each work unit, the summation happens in a deterministic  
order.


3.  The final summation of the results of all the work units is done in  
a deterministic order.


4.  The smallest term in the summation is about 5e-10.  This means the  
difference across runs is about two orders of magnitude smaller than the  
smallest term.  It can't be a concurrency bug where some terms sometimes  
get skipped.


5.  The results for the individual tasks, not just the final summation,  
differ in the low-order bits.  Each task is executed in a single thread.


6.  The rounding mode is apparently the same in all of the threads.

7.  The bug appears even on machines with only one core, as long as the  
number of task pool threads is manually set to 0.  Since it's a single  
core machine, it can't be a low level memory model issue.


What could possibly cause such small, non-deterministic differences in  
floating point results, given everything above?  I'm just looking for  
suggestions here, as I don't even know where to start hunting for a bug  
like this.


Well, on one hand floating point math is not associative, and running sums 
have many known issues (I'd recommend looking up Kahan summation). On the 
other hand, it should be repeatably different.
As for suggestions? First and foremost, you should always add small to 
large, so try using iota(n-1,-1,-1) instead of iota(n). Not only should 
the answer be better, but if your error rate goes down, you have a good 
idea of where the problem is. I'd also try isolating your implementation's 
numerics from the underlying concurrency, i.e. use a task pool of 1 and 
don't let the host thread join it, so the entire job is done by one 
worker. The other thing to try is isolating/removing map and iota from 
the equation.
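
For reference, a minimal compensated-summation sketch in D (kahanSum is a 
hypothetical helper, not anything in Phobos; note that aggressive FP 
reassociation by a compiler can defeat the trick):

real kahanSum(const(real)[] terms)
{
    real sum = 0.0;
    real c   = 0.0;                 // running compensation term
    foreach (t; terms)
    {
        immutable y   = t - c;      // feed back the error from the last step
        immutable tmp = sum + y;    // low-order bits of y may be lost here
        c   = (tmp - sum) - y;      // algebraically zero; numerically the loss
        sum = tmp;
    }
    return sum;
}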


Re: Try it now

2011-04-16 Thread Jérôme M. Berger
Roman Ivanov wrote:
 == Quote from Jacob Carlborg (d...@me.com)'s article
 On 2011-04-14 18:48, Andrei Alexandrescu wrote:
 On 4/14/11 9:03 AM, Steven Schveighoffer wrote:
 Sometimes, I worry that my unit tests or asserts aren't running. Every
 once in a while, I have to change one to fail to make sure that code is
 compiling (this is especially true when I'm doing version statements or
 templates). It would be nice if there was a -assertprint mode which
 showed asserts actually running (only for the module compiled with that
 switch, of course).
 Could this be achieved within the language?

 Andrei
 Don't know exactly how he wants it to behave, but have a look at one
 of my earlier posts:

 http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=134796
 
 I'm somewhat shifting the topic, but it seems strange that unit tests are run 
 when
 you run an executable. Wouldn't it make sense to run them immediately after
 compilation? I mean, what would be the use case where you would want to 
 re-run a
 unit test on the code that's already compiled and tested? This could also 
 solve
 the problem with messages on success, since you can output a success message 
 after
 compilation.
 
 Sorry if I'm missing some obvious issue with this suggestion.

Off the top of my head, I see two reasons why running the tests
separately is a good thing:
 - It allows running the tests in a debugger;
 - Cross-compilation.

Jerome
-- 
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr


Re: Floating Point + Threads?

2011-04-16 Thread Walter Bright

On 4/15/2011 8:40 PM, Andrei Alexandrescu wrote:

On 4/15/11 10:22 PM, dsimcha wrote:

I'm trying to debug an extremely strange bug whose symptoms appear in a
std.parallelism example, though I'm not at all sure the root cause is in
std.parallelism. The bug report is at
https://github.com/dsimcha/std.parallelism/issues/1#issuecomment-1011717 .


Does the scheduling affect the summation order?


That's a good thought. FP addition results can differ dramatically depending on 
associativity.
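
A concrete illustration of the associativity point (plain D, with values 
chosen so the rounding is easy to see):

void main()
{
    double a = 1e16, b = -1e16, c = 1.0;
    assert((a + b) + c == 1.0);   // 0.0 + 1.0
    assert(a + (b + c) == 0.0);   // b + c rounds back to -1e16
}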


Re: Ceylon language

2011-04-16 Thread David Nadlinger

On 4/15/11 8:19 PM, Andrej Mitrovic wrote:

[…] My biggest issue is that
I can't modify variables at compile time. I wish there was some
special CTFE int type which is only visible at compile-time and which
we can use however we want. That way I could keep count of things, for
example I could generate a list of indexes that can then be mixed in
as a string.
[…]
I dunno, CTFE overall seems like a buggy thing where I have to guess whether 
something will work or not. It's very stress-inducing.


Correct me if I'm wrong, but might it be that you are conflating 
templates and CTFE, which are – although they are both used for 
metaprogramming – two entirely different concepts?


Maybe it helps to think of calling a function at compile time as a gate 
into a parallel universe. In this other world, you can modify variables, 
call functions, etc. as you please, just as if you were executing code 
at runtime (well, inside the more or less implementation-defined 
boundaries of CTFE). The only restriction is that you can only return a 
single value through the gate, there is no other way to influence the 
»compiler« universe from inside the functions you call. More 
specifically, there is no way to manipulate types in the compiler 
universe – although you can instantiate templates in CTFE functions just 
like normal, you will never be able to »return« them back to the outside 
world. Also, be aware of the fact that a CTFE function can just work on 
_values_, like every other runtime function, not on types.
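
As a minimal illustration of that point (ordinary D2 - mutation is fine 
inside the gate, and only the returned value crosses back):

---
// countBits is just an ordinary function; nothing marks it as CTFE-only.
int countBits(int n)
{
    int count = 0;               // mutable state is fine *inside* CTFE
    while (n)
    {
        count += n & 1;
        n >>= 1;
    }
    return count;
}

// Assigning to an enum forces compile-time evaluation: only the single
// value 3 comes back through the "gate".
enum bits = countBits(0b1011);
static assert(bits == 3);
---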


So much for CTFE. Templates, on the other hand, are basically just named 
scopes with a few extra features which happen to make a Turing complete 
language. As such, there will never be something like a runtime variable 
modifiable at compile time in templates, as you asked (at least I hope 
so). The interesting thing about templates is that they allow you to 
work with types themselves. Due to the absence of mutable state, you are 
generally limited to functional techniques, though, which can be 
uncomfortably similar to the proverbial Turing tarpit in some cases 
(although it's surprising how easy it is to write certain algorithms in 
a functional manner once you got the hang of it).


However, there are ways to combine templates and CTFE to your advantage. 
Without any concrete question, it's obviously hard to give a good 
suggestion, but an example would be to use template functions called at 
compile time to produce string mixins:


---
string generateCode(T...)() {
    string code;
    [… construct a string containing some declarations …]
    return code;
}

template Foo(T...) {
    alias DoSomethingWithATypeTuple!(T) U;
    mixin(generateCode!(U)());
}
---

You can also shorten this by using delegate literals:
---
template Foo(T...) {
    alias DoSomethingWithATypeTuple!(T) U;
    mixin({
        […]
        return "some code generated while having access to T and U";
    }());
}
---

Another small template metaprogramming example showing a way to process 
a list of types without needing mutable state. Specifically, it aliases 
a new type tuple to Result which doesn't contain any items where 
Exp.numerator is 0 (you can do the same with eponymous templates). 
TypeTuples are automatically flattened, which allows for a concise 
implementation here.


---
import std.typetuple;

template Process(T...) {
    static if (T.length == 0) {
        alias TypeTuple!() Result;
    } else {
        alias T[0] A;
        static if (A.Exp.numerator == 0) {
            alias Process!(T[1..$]).Result Result;
        } else {
            alias TypeTuple!(A, Process!(T[1..$]).Result) Result;
        }
    }
}
---
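
A hypothetical usage sketch (Term is made up purely for illustration, and 
assumes std.typetuple is imported as above):

---
// Each Term carries a nested Exp with a numerator, which is what
// Process inspects.
struct Term(int num)
{
    struct Exp { enum numerator = num; }
}

alias Process!(Term!(1), Term!(0), Term!(3)).Result Kept;
static assert(Kept.length == 2);   // Term!(0) was filtered out
---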

As for constructing lists of indices and then generating code from them: 
If you need to work on types for generating the list, you could e.g. use 
some recursive template to construct a type tuple of integer literals 
(that name really doesn't fit well), and then process it via CTFE to 
generate code to be mixed in – or whatever you need in your specific 
case. Feel free to ask about any specific problems in d.D.learn.


David


Re: Floating Point + Threads?

2011-04-16 Thread Fawzi Mohamed


On 16-apr-11, at 09:41, Walter Bright wrote:


On 4/15/2011 8:40 PM, Andrei Alexandrescu wrote:

On 4/15/11 10:22 PM, dsimcha wrote:
I'm trying to debug an extremely strange bug whose symptoms appear  
in a
std.parallelism example, though I'm not at all sure the root cause  
is in

std.parallelism. The bug report is at
https://github.com/dsimcha/std.parallelism/issues/1#issuecomment-1011717 .


Does the scheduling affect the summation order?


That's a good thought. FP addition results can differ dramatically  
depending on associativity.


Yes, one can avoid this by using a tree algorithm with a fixed 
blocksize; then the results will be the same in both the serial and 
parallel cases.

Normally one uses atomic summation though.
In blip I spent quite a bit of thought on tree-like algorithms and 
their parallelization, exactly because they parallelize well and are 
independent of the parallelization.
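
A hedged sketch of what such a fixed-blocksize tree sum might look like in D 
(treeSum is a hypothetical helper; the point is that the association order 
depends only on the block size, not on which thread computes which half):

real treeSum(const(real)[] a, size_t blockSize = 256)
{
    if (a.length <= blockSize)
    {
        real s = 0.0;
        foreach (x; a)
            s += x;                 // fixed left-to-right order within a leaf
        return s;
    }
    // Deterministic split point, rounded up to a block boundary.
    immutable half = (a.length + 1) / 2;
    immutable mid  = ((half + blockSize - 1) / blockSize) * blockSize;
    // The two halves are independent, so a parallel version can evaluate
    // them on different threads without changing the result.
    return treeSum(a[0 .. mid], blockSize) + treeSum(a[mid .. $], blockSize);
}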


Fawzi


Re: Floating Point + Threads?

2011-04-16 Thread Fawzi Mohamed


On 16-apr-11, at 05:22, dsimcha wrote:

I'm trying to debug an extremely strange bug whose symptoms appear  
in a std.parallelism example, though I'm not at all sure the root  
cause is in std.parallelism.  The bug report is at 
https://github.com/dsimcha/std.parallelism/issues/1#issuecomment-1011717 .


Basically, the example in question sums up all the elements of a  
lazy range (actually, std.algorithm.map) in parallel.  It uses  
taskPool.reduce, which divides the summation into work units to be  
executed in parallel.  When executed in parallel, the results of the  
summation are non-deterministic after about the 12th decimal place,  
even though all of the following properties are true:


1.  The work is divided into work units in a deterministic fashion.

2.  Within each work unit, the summation happens in a deterministic  
order.


3.  The final summation of the results of all the work units is done  
in a deterministic order.


4.  The smallest term in the summation is about 5e-10.  This means  
the difference across runs is about two orders of magnitude smaller  
than the smallest term.  It can't be a concurrency bug where some  
terms sometimes get skipped.


5.  The results for the individual tasks, not just the final  
summation, differ in the low-order bits.  Each task is executed in a  
single thread.


6.  The rounding mode is apparently the same in all of the threads.

7.  The bug appears even on machines with only one core, as long as  
the number of task pool threads is manually set to 0.  Since it's a  
single core machine, it can't be a low level memory model issue.


What could possibly cause such small, non-deterministic differences  
in floating point results, given everything above?  I'm just looking  
for suggestions here, as I don't even know where to start hunting  
for a bug like this.


It might be due to a context switch between threads, which might push a 
double out of the higher-precision 80-bit FPU register and lose the 
extra precision.
SSE, or float, should not have these problems. gcc has an option to 
always store the result in memory and avoid the extra precision; 
maybe having such an option in dmd to debug such issues would be a nice 
thing.
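
A minimal illustration of that kind of precision loss (assumes x87-style 
80-bit reals, as with DMD on x86; on a platform where real is just double 
the assert would fail):

void main()
{
    real a = 1.0L + real.epsilon;   // differs from 1.0 only in the extended bits
    double d = a;                   // the store rounds to 64-bit double
    assert(a != d);                 // the extra precision is gone
}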


Fawzi


Re: Temporarily disable all purity for debug prints

2011-04-16 Thread bearophile
Walter:

 Yes, it allows one to break the purity of the function. The alternative is to 
 use casts (which also breaks the purity) or another compiler switch (which 
 also 
 breaks the purity).

A compiler switch to disable purity doesn't break purity. It just turns all 
pure functions inside the program (or inside the compilation unit) into not 
pure ones. At this point you are allowed to call writeln too, safely. 

A problem: if other already-compiled parts of the program use the purity of the 
functions you have just turned into not-pure ones with a compiler switch, your 
program goes bad. So I presume you have to recompile the code that uses the 
no-longer-pure functions.

Bye,
bearophile



Re: Floating Point + Threads?

2011-04-16 Thread Iain Buclaw
== Quote from Walter Bright (newshou...@digitalmars.com)'s article
 On 4/15/2011 8:40 PM, Andrei Alexandrescu wrote:
  On 4/15/11 10:22 PM, dsimcha wrote:
  I'm trying to debug an extremely strange bug whose symptoms appear in a
  std.parallelism example, though I'm not at all sure the root cause is in
  std.parallelism. The bug report is at
  https://github.com/dsimcha/std.parallelism/issues/1#issuecomment-1011717 .
 
  Does the scheduling affect the summation order?
 That's a good thought. FP addition results can differ dramatically depending 
 on
 associativity.

And not to forget optimisations too. ;)


Re: Twitter hashtag for D?

2011-04-16 Thread Spacen Jasset
On 30/07/2009 10:12, MIURA Masahiro wrote:
 Walter Bright wrote:
 How about #d-lang ?

 #dpl ?
 
 I just tested those two.
 Although no one else uses #d-lang, it seems that twitter.com
 doesn't treat it as a hashtag (because it contains a dash?).
 #dpl gives a few false positives.

So what do people currently use for C and C++ then?


Re: Floating Point + Threads?

2011-04-16 Thread dsimcha

On 4/15/2011 11:40 PM, Andrei Alexandrescu wrote:

On 4/15/11 10:22 PM, dsimcha wrote:

I'm trying to debug an extremely strange bug whose symptoms appear in a
std.parallelism example, though I'm not at all sure the root cause is in
std.parallelism. The bug report is at
https://github.com/dsimcha/std.parallelism/issues/1#issuecomment-1011717
.


Does the scheduling affect the summation order?

Andrei


No.  I realize floating point addition isn't associative, but unless 
there's some detail I'm forgetting about, the ordering is deterministic 
within each work unit and the ordering of the final summation is 
deterministic.


Re: Floating Point + Threads?

2011-04-16 Thread dsimcha

On 4/16/2011 2:09 AM, Robert Jacques wrote:

On Fri, 15 Apr 2011 23:22:04 -0400, dsimcha dsim...@yahoo.com wrote:


I'm trying to debug an extremely strange bug whose symptoms appear in
a std.parallelism example, though I'm not at all sure the root cause
is in std.parallelism. The bug report is at
https://github.com/dsimcha/std.parallelism/issues/1#issuecomment-1011717
.

Basically, the example in question sums up all the elements of a lazy
range (actually, std.algorithm.map) in parallel. It uses
taskPool.reduce, which divides the summation into work units to be
executed in parallel. When executed in parallel, the results of the
summation are non-deterministic after about the 12th decimal place,
even though all of the following properties are true:

1. The work is divided into work units in a deterministic fashion.

2. Within each work unit, the summation happens in a deterministic order.

3. The final summation of the results of all the work units is done in
a deterministic order.

4. The smallest term in the summation is about 5e-10. This means the
difference across runs is about two orders of magnitude smaller than
the smallest term. It can't be a concurrency bug where some terms
sometimes get skipped.

5. The results for the individual tasks, not just the final summation,
differ in the low-order bits. Each task is executed in a single thread.

6. The rounding mode is apparently the same in all of the threads.

7. The bug appears even on machines with only one core, as long as the
number of task pool threads is manually set to 0. Since it's a single
core machine, it can't be a low level memory model issue.

What could possibly cause such small, non-deterministic differences in
floating point results, given everything above? I'm just looking for
suggestions here, as I don't even know where to start hunting for a
bug like this.


Well, on one hand floating point math is not associative, and running
sums have many known issues (I'd recommend looking up Kahan summation).
On the other hand, it should be repeatably different.
As for suggestions? First and foremost, you should always add small to
large, so try using iota(n-1,-1,-1) instead of iota(n). Not only should
the answer be better, but if your error rate goes down, you have a good
idea of where the problem is. I'd also try isolating your
implementation's numerics from the underlying concurrency, i.e. use a
task pool of 1 and don't let the host thread join it, so the entire job
is done by one worker. The other thing to try is isolating/removing map
and iota from the equation.


Right.  For this example, though, assuming floating point math behaves 
like regular math is a good enough approximation.  The issue isn't that 
the results aren't reasonably accurate.  Furthermore, the results will 
always change slightly depending on how many work units you have.  (I 
even warn in the documentation that floating point addition is not 
associative, though it is approximately associative in the well-behaved 
cases.)


My only concern is whether this non-determinism represents some deep 
underlying bug.  For a given work unit allocation (work unit allocations 
are deterministic and only change when the number of threads changes or 
they're changed explicitly), I can't figure out how scheduling could 
change the results at all.  If I could be sure that it wasn't a symptom 
of an underlying bug in std.parallelism, I wouldn't care about this 
small amount of numerical fuzz.  Floating point math is always inexact 
and parallel summation by its nature can't be made to give the exact 
same results as serial summation.


Re: compile phobos into 64bit -- error!

2011-04-16 Thread Andrei Alexandrescu

On 4/16/11 9:44 AM, David Wang wrote:

Hi, all,

[snip]

What operating system?

Andrei


compile phobos into 64bit -- error!

2011-04-16 Thread David Wang
Hi, all,

I've downloaded the latest dmd & druntime & phobos from gitHub.com;
I copied them into a 32bit folder and a 64bit folder; I combined them
separately into a 32bit version and a 64bit version.

1). 32bit for dmd & druntime & phobos -- passed.
2). 64bit for dmd & druntime -- passed; but phobos -- failed. Please view the
info as follows:
info as follows:

(I change the model of phobos to 64bit:



ifeq (,$(MODEL))
MODEL:=64
endif



)
=
[David@Ocean phobos]$ make -f posix.mak DMD=dmd
make --no-print-directory -f posix.mak OS=linux MODEL=64 BUILD=release
cc -c  -m64 -O3 etc/c/zlib/adler32.c
-ogenerated/linux/release/64/etc/c/zlib/adler32.o
cc -c  -m64 -O3 etc/c/zlib/compress.c
-ogenerated/linux/release/64/etc/c/zlib/compress.o
cc -c  -m64 -O3 etc/c/zlib/crc32.c 
-ogenerated/linux/release/64/etc/c/zlib/crc32.o
cc -c  -m64 -O3 etc/c/zlib/deflate.c
-ogenerated/linux/release/64/etc/c/zlib/deflate.o
cc -c  -m64 -O3 etc/c/zlib/gzclose.c
-ogenerated/linux/release/64/etc/c/zlib/gzclose.o
cc -c  -m64 -O3 etc/c/zlib/gzlib.c 
-ogenerated/linux/release/64/etc/c/zlib/gzlib.o
cc -c  -m64 -O3 etc/c/zlib/gzread.c
-ogenerated/linux/release/64/etc/c/zlib/gzread.o
cc -c  -m64 -O3 etc/c/zlib/gzwrite.c
-ogenerated/linux/release/64/etc/c/zlib/gzwrite.o
cc -c  -m64 -O3 etc/c/zlib/infback.c
-ogenerated/linux/release/64/etc/c/zlib/infback.o
cc -c  -m64 -O3 etc/c/zlib/inffast.c
-ogenerated/linux/release/64/etc/c/zlib/inffast.o
cc -c  -m64 -O3 etc/c/zlib/inflate.c
-ogenerated/linux/release/64/etc/c/zlib/inflate.o
cc -c  -m64 -O3 etc/c/zlib/inftrees.c
-ogenerated/linux/release/64/etc/c/zlib/inftrees.o
cc -c  -m64 -O3 etc/c/zlib/trees.c 
-ogenerated/linux/release/64/etc/c/zlib/trees.o
cc -c  -m64 -O3 etc/c/zlib/uncompr.c
-ogenerated/linux/release/64/etc/c/zlib/uncompr.o
cc -c  -m64 -O3 etc/c/zlib/zutil.c 
-ogenerated/linux/release/64/etc/c/zlib/zutil.o
dmd -I../druntime/import  -w -d -m64 -O -release -nofloat -lib
-ofgenerated/linux/release/64/libphobos2.a ../druntime/lib/libdruntime.a
crc32.d std/algorithm.d std/array.d std/base64.d std/bigint.d std/bitmanip.d
std/compiler.d std/complex.d std/concurrency.d std/container.d std/contracts.d
std/conv.d std/cpuid.d std/cstream.d std/ctype.d std/date.d std/datetime.d
std/datebase.d std/dateparse.d std/demangle.d std/encoding.d std/exception.d
std/file.d std/format.d std/functional.d std/getopt.d std/gregorian.d
std/intrinsic.d std/json.d std/loader.d std/math.d std/mathspecial.d std/md5.d
std/metastrings.d std/mmfile.d std/numeric.d std/outbuffer.d std/path.d
std/perf.d std/process.d std/random.d std/range.d std/regex.d std/regexp.d
std/signals.d std/socket.d std/socketstream.d std/stdint.d std/stdio.d
std/stdiobase.d std/stream.d std/string.d std/syserror.d std/system.d
std/traits.d std/typecons.d std/typetuple.d std/uni.d std/uri.d std/utf.d
std/variant.d std/xml.d std/zip.d std/zlib.d std/c/stdarg.d std/c/stdio.d
etc/c/zlib.d std/internal/math/biguintcore.d std/internal/math/biguintnoasm.d
std/internal/math/biguintx86.d std/internal/math/gammafunction.d
std/internal/math/errorfunction.d etc/c/curl.d std/c/linux/linux.d
std/c/linux/socket.d generated/linux/release/64/etc/c/zlib/adler32.o
generated/linux/release/64/etc/c/zlib/compress.o
generated/linux/release/64/etc/c/zlib/crc32.o
generated/linux/release/64/etc/c/zlib/deflate.o
generated/linux/release/64/etc/c/zlib/gzclose.o
generated/linux/release/64/etc/c/zlib/gzlib.o
generated/linux/release/64/etc/c/zlib/gzread.o
generated/linux/release/64/etc/c/zlib/gzwrite.o
generated/linux/release/64/etc/c/zlib/infback.o
generated/linux/release/64/etc/c/zlib/inffast.o
generated/linux/release/64/etc/c/zlib/inflate.o
generated/linux/release/64/etc/c/zlib/inftrees.o
generated/linux/release/64/etc/c/zlib/trees.o
generated/linux/release/64/etc/c/zlib/uncompr.o
generated/linux/release/64/etc/c/zlib/zutil.o
std.contracts has been scheduled for deprecation. Please use std.exception
instead.
std.date and std.dateparse have been scheduled for deprecation. Please use
std.datetime instead.
std.gregorian has been scheduled for deprecation. Please use std.datetime 
instead.
std.perf has been scheduled for deprecation. Please use std.datetime instead.
std/uni.d(585): Error: cannot implicitly convert expression (table.length -
1LU) of type ulong to uint
std/uri.d(397): Error: template std.string.icmp(alias pred = "a < b",S1,S2) if
(is(Unqual!(ElementType!(S1)) == dchar) && is(Unqual!(ElementType!(S2)) ==
dchar)) does not match any function template declaration
std/uri.d(397): Error: template std.string.icmp(alias pred = "a < b",S1,S2) if
(is(Unqual!(ElementType!(S1)) == dchar) && is(Unqual!(ElementType!(S2)) ==
dchar)) cannot deduce template function from argument types !()(string,string)
std/uri.d(399): Error: template std.string.icmp(alias pred = "a < b",S1,S2) if
(is(Unqual!(ElementType!(S1)) == dchar) && is(Unqual!(ElementType!(S2)) ==
dchar)) does not match any function template declaration
std/uri.d(399): Error: template std.string.icmp(alias 

Re: Floating Point + Threads?

2011-04-16 Thread dsimcha

On 4/16/2011 10:11 AM, dsimcha wrote:

My only concern is whether this non-determinism represents some deep
underlying bug. For a given work unit allocation (work unit allocations
are deterministic and only change when the number of threads changes or
they're changed explicitly), I can't figure out how scheduling could
change the results at all. If I could be sure that it wasn't a symptom
of an underlying bug in std.parallelism, I wouldn't care about this
small amount of numerical fuzz. Floating point math is always inexact
and parallel summation by its nature can't be made to give the exact
same results as serial summation.


Ok, it's definitely **NOT** a bug in std.parallelism.  Here's a reduced 
test case that only uses core.thread, not std.parallelism.  All it does 
is sum an array using std.algorithm.reduce from the main thread, then 
start a new thread to do the same thing and compare answers.  At the 
beginning of the summation function the rounding mode is printed to 
verify that it's the same for both threads.  The two threads get 
slightly different answers.


Just to thoroughly rule out a concurrency bug, I didn't even let the two 
threads execute concurrently.  The main thread produces its result, then 
starts and immediately joins the second thread.


import std.algorithm, core.thread, std.stdio, core.stdc.fenv;

real sumRange(const(real)[] range) {
    writeln("Rounding mode:  ", fegetround);  // 0 from both threads.
    return reduce!"a + b"(range);
}

void main() {
    immutable n = 1_000_000;
    immutable delta = 1.0 / n;

    auto terms = new real[1_000_000];
    foreach(i, ref term; terms) {
        immutable x = ( i - 0.5 ) * delta;
        term = delta / ( 1.0 + x * x ) * 1;
    }

    immutable res1 = sumRange(terms);
    writefln("%.19f", res1);

    real res2;
    auto t = new Thread( { res2 = sumRange(terms); } );
    t.start();
    t.join();
    writefln("%.19f", res2);
}


Output:
Rounding mode:  0
0.7853986633972191094
Rounding mode:  0
0.7853986633972437348


Re: Floating Point + Threads?

2011-04-16 Thread dsimcha

On 4/16/2011 10:55 AM, Andrei Alexandrescu wrote:

On 4/16/11 9:52 AM, dsimcha wrote:

Output:
Rounding mode: 0
0.7853986633972191094
Rounding mode: 0
0.7853986633972437348


I think at this precision the difference may be in random bits. Probably
you don't need to worry about it.

Andrei


random bits?  I am fully aware that these low order bits are numerical 
fuzz and are meaningless from a practical perspective.  I am only 
concerned because I thought these bits are supposed to be deterministic 
even if they're meaningless.  Now that I've ruled out a bug in 
std.parallelism, I'm wondering if it's a bug in druntime or DMD.


Re: Floating Point + Threads?

2011-04-16 Thread Andrei Alexandrescu

On 4/16/11 9:52 AM, dsimcha wrote:

Output:
Rounding mode: 0
0.7853986633972191094
Rounding mode: 0
0.7853986633972437348


I think at this precision the difference may be in random bits. Probably 
you don't need to worry about it.


Andrei


Re: Floating Point + Threads?

2011-04-16 Thread Iain Buclaw
== Quote from dsimcha (dsim...@yahoo.com)'s article
 On 4/16/2011 10:11 AM, dsimcha wrote:
 Output:
 Rounding mode:  0
 0.7853986633972191094
 Rounding mode:  0
 0.7853986633972437348

This is not something I can replicate on my workstation.


Re: Floating Point + Threads?

2011-04-16 Thread dsimcha

On 4/16/2011 11:16 AM, Iain Buclaw wrote:

== Quote from dsimcha (dsim...@yahoo.com)'s article

On 4/16/2011 10:11 AM, dsimcha wrote:
Output:
Rounding mode:  0
0.7853986633972191094
Rounding mode:  0
0.7853986633972437348


This is not something I can replicate on my workstation.


Interesting.  Since I know you're a Linux user, I fired up my Ubuntu VM 
and tried out my test case.  I can't reproduce it either on Linux, only 
on Windows.


Re: compile phobos into 64bit -- error!

2011-04-16 Thread David Wang

== Forward by Andrei Alexandrescu (seewebsiteforem...@erdani.org)
== Posted at 2011/04/16 10:43 to digitalmars.D

On 4/16/11 9:44 AM, David Wang wrote:
 Hi, all,
[snip]

What operating system?

Andrei


Hi, Andrei,
I'm using Fedora 14 x86_64.


Best regards.
David.


Re: Floating Point + Threads?

2011-04-16 Thread JimBob

dsimcha dsim...@yahoo.com wrote in message 
news:iocalv$2h58$1...@digitalmars.com...
 Output:
 Rounding mode:  0
 0.7853986633972191094
 Rounding mode:  0
 0.7853986633972437348

Could be something somewhere is getting truncated from real to double, which 
would mean 11 fewer bits of mantissa. Maybe the FPU is set to lower precision 
in one of the threads?




Re: Floating Point + Threads?

2011-04-16 Thread Andrei Alexandrescu

On 4/16/11 9:59 AM, dsimcha wrote:

On 4/16/2011 10:55 AM, Andrei Alexandrescu wrote:

On 4/16/11 9:52 AM, dsimcha wrote:

Output:
Rounding mode: 0
0.7853986633972191094
Rounding mode: 0
0.7853986633972437348


I think at this precision the difference may be in random bits. Probably
you don't need to worry about it.

Andrei


random bits? I am fully aware that these low order bits are numerical
fuzz and are meaningless from a practical perspective. I am only
concerned because I thought these bits are supposed to be deterministic
even if they're meaningless. Now that I've ruled out a bug in
std.parallelism, I'm wondering if it's a bug in druntime or DMD.


I seem to remember that some of the last bits printed in 
such a result are essentially arbitrary. I forgot what could cause this.


Andrei


Re: Temporarily disable all purity for debug prints

2011-04-16 Thread Walter Bright

On 4/16/2011 6:41 AM, bearophile wrote:

Yes, it allows one to break the purity of the function. The alternative is
to use casts (which also breaks the purity) or another compiler switch
(which also breaks the purity).


A compiler switch to disable purity doesn't break purity. It just turns all
pure functions inside the program (or inside the compilation unit) into not
pure ones. At this point you are allowed to call writeln too, safely.



Saying it is a safe way to break purity assumes that there was no purpose to the 
purity. There is no guaranteed safe way to break purity, with or without a 
compiler switch.


Re: Floating Point + Threads?

2011-04-16 Thread Walter Bright

On 4/16/2011 6:46 AM, Iain Buclaw wrote:

== Quote from Walter Bright (newshou...@digitalmars.com)'s article

That's a good thought. FP addition results can differ dramatically depending on
associativity.


And not to forget optimisations too. ;)


The dmd optimizer is careful not to reorder evaluation in such a way as to 
change the results.


Re: Floating Point + Threads?

2011-04-16 Thread Timon Gehr

 Could be something somewhere is getting truncated from real to double, which
 would mean 11 fewer bits of mantissa. Maybe the FPU is set to lower precision
 in one of the threads?

Yes indeed, this is a _Windows_ bug.
I have experienced this in Windows before: the main thread's FPU state is
initialized to lower FPU precision (64 bits) by default by the OS, presumably to
make FP calculations faster. However, when you start a new thread, the FPU will
use the whole 80 bits for computation because, curiously, the FPU is not
reconfigured for those.
Suggested fix: Add

asm{fninit};

to the beginning of your main function, and the difference between the two will 
be
gone.

This would be a compatibility issue with DMD/Windows which effectively disables 
the real data type. You might want to file a bug report for druntime if my 
suggested fix works.
(This would imply that the real type was basically identical to the double type 
in Windows all along!)


Re: Floating Point + Threads?

2011-04-16 Thread dsimcha

On 4/16/2011 2:15 PM, Timon Gehr wrote:



Could be something somewhere is getting truncated from real to double, which
would mean 11 fewer bits of mantissa. Maybe the FPU is set to lower precision
in one of the threads?


Yes indeed, this is a _Windows_ bug.
I have experienced this in Windows before: the main thread's FPU state is
initialized to lower FPU precision (64 bits) by default by the OS, presumably to
make FP calculations faster. However, when you start a new thread, the FPU will
use the whole 80 bits for computation because, curiously, the FPU is not
reconfigured for those.
Suggested fix: Add

asm{fninit};

to the beginning of your main function, and the difference between the two will 
be
gone.

This would be a compatibility issue with DMD/Windows which effectively disables 
the real data type. You might want to file a bug report for druntime if my 
suggested fix works.
(This would imply that the real type was basically identical to the double type 
in Windows all along!)


Close:  If I add this instruction to the function for the new thread, 
the difference goes away.  The relevant statement is:


auto t = new Thread( {
asm { fninit; }
res2 = sumRange(terms);
} );

At any rate, this is a **huge** WTF that should probably be fixed in 
druntime.  Once I understand it a little better, I'll file a bug report.


Re: std.parallelism: Review resumes now

2011-04-16 Thread Alix Pexton

On 13/04/2011 15:28, Lars T. Kyllingstad wrote:

We now resume the formal review of David Simcha's std.parallelism
module.  Code and documentation can be found here:

 https://github.com/dsimcha/std.parallelism/blob/master/parallelism.d
 http://cis.jhu.edu/~dsimcha/d/phobos/std_parallelism.html

Please post reviews and comments in this thread.

Voting for inclusion in Phobos will start on April 19 and end on April
26.  Please do not cast any votes in this thread.  I will start a new
thread for that purpose when the voting period begins.

-Lars


I'll be away all next week, and I doubt I'll have internet, so I'd 
like to cast my vote in advance ^^


I vote *for* inclusion!

Good Luck David ^^

A...


std.parallelism: Naming?

2011-04-16 Thread dsimcha
I'm reconsidering the naming of std.parallelism.  The name is catchy, 
but perhaps too general.  std.parallelism currently targets SMP 
parallelism.  In the future it would be nice for Phobos to target SIMD 
parallelism and distributed message passing parallelism, too.  These 
might belong in different modules.  Then again, std.smp or std.multicore 
or something just doesn't sound as catchy.  SIMD would probably just be 
array ops and stuff.  Distributed message passing would probably be 
absorbed by std.concurrency since the distinction between concurrency 
and parallelism isn't as obvious at this level and std.concurrency is 
already the home of message passing stuff.  Please comment.


Re: Floating Point + Threads?

2011-04-16 Thread Iain Buclaw
== Quote from Walter Bright (newshou...@digitalmars.com)'s article
 On 4/16/2011 6:46 AM, Iain Buclaw wrote:
  == Quote from Walter Bright (newshou...@digitalmars.com)'s article
  That's a good thought. FP addition results can differ dramatically 
  depending on
  associativity.
 
  And not to forget optimisations too. ;)
 The dmd optimizer is careful not to reorder evaluation in such a way as to
 change the results.

And so it rightly shouldn't!

I was thinking more of a case of FPU precision rather than ordering: as in you 
get
a different result computing on SSE in double precision mode on the one hand, 
and
by computing on x87 in double precision then writing to a double variable in 
memory.


Classic example (which could either be a bug or non-bug depending on your POV):

void test(double x, double y)
{
  double y2 = x + 1.0; // with -O this may be computed and kept in an 80-bit
                       // x87 register, never rounded down to a 64-bit double
  assert(y == y2);     // triggers with -O
}

void main()
{
  double x = .012;
  double y = x + 1.0;  // y was rounded to double when stored to memory
  test(x, y);
}



Re: Floating Point + Threads?

2011-04-16 Thread dsimcha

On 4/16/2011 2:24 PM, dsimcha wrote:

On 4/16/2011 2:15 PM, Timon Gehr wrote:



Could be something somewhere is getting truncated from real to double,
which would mean 11 fewer bits of mantissa. Maybe the FPU is set to
lower precision in one of the threads?


Yes indeed, this is a _Windows_ bug.
I have experienced this in Windows before: the main thread's FPU state
is initialized to lower FPU precision (64 bits) by default by the OS,
presumably to make FP calculations faster. However, when you start a new
thread, the FPU will use the whole 80 bits for computation because,
curiously, the FPU is not reconfigured for those.
Suggested fix: Add

asm{fninit};

to the beginning of your main function, and the difference between the
two will be
gone.

This would be a compatibility issue with DMD/Windows which effectively
disables the real data type. You might want to file a bug report for
druntime if my suggested fix works.
(This would imply that the real type was basically identical to the
double type in Windows all along!)


Close: If I add this instruction to the function for the new thread, the
difference goes away. The relevant statement is:

auto t = new Thread( {
asm { fninit; }
res2 = sumRange(terms);
} );

At any rate, this is a **huge** WTF that should probably be fixed in
druntime. Once I understand it a little better, I'll file a bug report.


Read up a little on what fninit does, etc.  This is IMHO a druntime bug. 
 Filed as http://d.puremagic.com/issues/show_bug.cgi?id=5847 .


Re: Floating Point + Threads?

2011-04-16 Thread Walter Bright

On 4/16/2011 9:52 AM, Andrei Alexandrescu wrote:

I seem to remember that essentially some of the last bits printed in such a
result are essentially arbitrary. I forgot what could cause this.


To see what the exact bits are, print using %A format.

In any case, floating point bits are not random. They are completely 
deterministic.
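
For example (plain D; %A is the hexadecimal float format, so the output 
shows the exact significand bits):

import std.stdio;

void main()
{
    real x = 0.1L;
    writefln("%.19f", x);   // decimal rendering hides the low-order bits
    writefln("%A", x);      // hex-float rendering is exact
}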


Re: Floating Point + Threads?

2011-04-16 Thread Walter Bright

On 4/16/2011 11:43 AM, Iain Buclaw wrote:

I was thinking more of a case of FPU precision rather than ordering: as in you 
get
a different result computing on SSE in double precision mode on the one hand, 
and
by computing on x87 in double precision then writing to a double variable in 
memory.


You're right on that one.


Re: Temporarily disable all purity for debug prints

2011-04-16 Thread bearophile
Walter:

 Saying it is a safe way to break purity assumes that there was no purpose to 
 the purity.
 There is no guaranteed safe way to break purity, with or without a compiler 
 switch.

The compiler switch I am talking about doesn't break purity. Its purpose is 
similar to removing the pure attributes from the source code. And doing it is 
usually safe, if you do it in the whole program.

If you take a D2 program, you remove all its pure attributes and you compile it 
again, the result is generally a program just as correct as before. The 
compiler just doesn't perform the optimizations that purity allows: a function 
that's now not pure gets computed each time you call it, it doesn't use the 
pure GC Don was talking about, the (future) conditionally pure higher order 
functions just return a not pure result, it keeps not using global mutable 
state because you have not modified the source code in any other way. 

If you recompile all the program parts that use the functions that are now not 
pure, the only things that break are static constraints (or static asserts) 
that require a function pointer to be pure, and similar things, but I have not 
used such things so far, they are probably quite uncommon.

An example: I have a little buggy, strongly pure function sqr. The 
smart compiler moves the call to sqr out of the foreach loop because sqr is 
pure and x doesn't change inside the loop:

import std.stdio: writeln;

pure int sqr(const int x) {
  int result = x * x * x; // bug
  version (mydebug) writeln(result);
  return result;
}

void main() {
  int x = 10;
  int total;
  foreach (i; 0 .. 10)
total += sqr(x);
  writeln(total);
}

To debug sqr I use the -nopure switch, its purpose is just to virtually comment 
out all the pure annotations. I also compile with -version=mydebug (I have 
avoided debug{} itself just to avoid confusion):


import std.stdio: writeln;

/*pure*/ int sqr(const int x) {
  int result = x * x * x; // bug
  version (mydebug) writeln(result);
  return result;
}

void main() {
  int x = 10;
  int total;
  foreach (i; 0 .. 10)
total += sqr(x);
  writeln(total);
}

Now the writeln() inside sqr works and its purity is not broken, it's just 
absent, and the compiler just calls sqr ten times because sqr is not pure any 
more.


On the other hand if you have a program like this:

import std.stdio, std.traits;

pure int sqr(const int x) {
  int result = x * x * x; // bug
  version (mydebug) writeln(result);
  return result;
}

auto pureApply(TF, T)(TF f, T x)
if (functionAttributes!f & FunctionAttribute.PURE) {
    return f(x);
}

void main() {
  int x = 10;
  writeln(pureApply(sqr, x));
}

If you compile this program with -nopure it breaks, because pureApply() 
requires f to be pure. I think this kind of code is uncommon.

Bye,
bearophile


Re: Floating Point + Threads?

2011-04-16 Thread Sean Kelly

On Apr 16, 2011, at 11:43 AM, dsimcha wrote:
 
 Close: If I add this instruction to the function for the new thread, the
 difference goes away. The relevant statement is:
 
 auto t = new Thread( {
 asm { fninit; }
 res2 = sumRange(terms);
 } );
 
 At any rate, this is a **huge** WTF that should probably be fixed in
 druntime. Once I understand it a little better, I'll file a bug report.
 
 Read up a little on what fninit does, etc.  This is IMHO a druntime bug.  
 Filed as http://d.puremagic.com/issues/show_bug.cgi?id=5847 .

Really a Windows bug that should be fixed in druntime :-)  I know I'm splitting 
hairs.  This will be fixed for the next release.



Re: std.parallelism: Naming?

2011-04-16 Thread Andrej Mitrovic
std.multicore? :p


Re: std.parallelism: Naming?

2011-04-16 Thread Andrei Alexandrescu

On 4/16/11 1:39 PM, dsimcha wrote:

I'm reconsidering the naming of std.parallelism. The name is catchy, but
perhaps too general. std.parallelism currently targets SMP parallelism.
In the future it would be nice for Phobos to target SIMD parallelism and
distributed message passing parallelism, too. These might belong in
different modules. Then again, std.smp or std.multicore or something
just doesn't sound as catchy. SIMD would probably just be array ops and
stuff. Distributed message passing would probably be absorbed by
std.concurrency since the distinction between concurrency and
parallelism isn't as obvious at this level and std.concurrency is
already the home of message passing stuff. Please comment.


I don't mind std.parallelism one bit.

Andrei


Re: Temporarily disable all purity for debug prints

2011-04-16 Thread Walter Bright

On 4/16/2011 11:52 AM, bearophile wrote:

Walter:


Saying it is a safe way to break purity assumes that there was no purpose
to the purity. There is no guaranteed safe way to break purity, with or
without a compiler switch.


The compiler switch I am talking about doesn't break purity. Its purpose is
similar to removing the pure attributes from the source code. And doing it
is usually safe, if you do it in the whole program.



No, it is not. You seem to be thinking that purity is just a bug finding or 
optimization feature. That is not so. Purity is a guarantee that can be relied 
upon for a program's behavior. Breaking purity breaks that guarantee.


(Think multithreaded programs, for example.)




If you take a D2 program, you remove all its pure attributes and you compile
it again, the result is generally a program just as correct as before.


"Generally" is not a verifiable characteristic. When we talk about safety, we're 
talking about a verifiable guarantee.


Re: Floating Point + Threads?

2011-04-16 Thread Walter Bright

On 4/16/2011 11:51 AM, Sean Kelly wrote:


On Apr 16, 2011, at 11:43 AM, dsimcha wrote:


Close: If I add this instruction to the function for the new thread, the
difference goes away. The relevant statement is:

auto t = new Thread( { asm { fninit; } res2 = sumRange(terms); } );

At any rate, this is a **huge** WTF that should probably be fixed in
druntime. Once I understand it a little better, I'll file a bug report.


Read up a little on what fninit does, etc.  This is IMHO a druntime bug.
Filed as http://d.puremagic.com/issues/show_bug.cgi?id=5847 .


Really a Windows bug that should be fixed in druntime :-)  I know I'm
splitting hairs.  This will be fixed for the next release.



The dmd startup code (actually the C startup code) does an fninit. I never 
thought about new thread starts. So, yeah, druntime should do an fninit on 
thread creation.
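
A hypothetical sketch only (not the actual druntime code) of the shape such 
a fix might take - reset the FPU at the top of each new thread's entry 
routine, mirroring the fninit the C startup code runs before main:

void threadEntry(void delegate() run)
{
    version (D_InlineAsm_X86)
        asm { fninit; }    // power-on default x87 control word
    run();
}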


Re: Floating Point + Threads?

2011-04-16 Thread bearophile
Walter:

 The dmd startup code (actually the C startup code) does an fninit. I never 
 thought about new thread starts. So, yeah, druntime should do an fninit on 
 thread creation.

My congratulations to all the (mostly two) people involved in finding this bug 
and its causes :-)
I'd like to see this module in Phobos.

Bye,
bearophile


Re: Floating Point + Threads?

2011-04-16 Thread Robert Jacques
On Sat, 16 Apr 2011 15:32:12 -0400, Walter Bright  
newshou...@digitalmars.com wrote:



On 4/16/2011 11:51 AM, Sean Kelly wrote:


On Apr 16, 2011, at 11:43 AM, dsimcha wrote:


Close: If I add this instruction to the function for the new thread, the
difference goes away. The relevant statement is:

auto t = new Thread( { asm { fninit; } res2 = sumRange(terms); } );

At any rate, this is a **huge** WTF that should probably be fixed in
druntime. Once I understand it a little better, I'll file a bug report.


Read up a little on what fninit does, etc.  This is IMHO a druntime bug.
Filed as http://d.puremagic.com/issues/show_bug.cgi?id=5847 .


Really a Windows bug that should be fixed in druntime :-)  I know I'm
splitting hairs.  This will be fixed for the next release.



The dmd startup code (actually the C startup code) does an fninit. I  
never thought about new thread starts. So, yeah, druntime should do an  
fninit on thread creation.


The documentation I've found on fninit seems to indicate it defaults to  
64-bit precision, which means that by default we aren't seeing the benefit  
of D's reals. I'd much prefer 80-bit precision by default.


Re: std.parallelism: Naming?

2011-04-16 Thread Dmitry Olshansky

On 16.04.2011 22:39, dsimcha wrote:
I'm reconsidering the naming of std.parallelism.  The name is catchy, 
but perhaps too general.  std.parallelism currently targets SMP 
parallelism.  In the future it would be nice for Phobos to target SIMD 
parallelism and distributed message passing parallelism, too.  These 
might belong in different modules.  Then again, std.smp or 
std.multicore or something just doesn't sound as catchy.  SIMD would 
probably just be array ops and stuff.  Distributed message passing 
would probably be absorbed by std.concurrency since the distinction 
between concurrency and parallelism isn't as obvious at this level and 
std.concurrency is already the home of message passing stuff.  Please 
comment.


I'm inclined to go with std.parallelism, the name is so cute :).
On the serious side of it, I think SIMD really belongs to compiler 
internals and std.intrinsics.
And any message passing should most likely go into std.concurrency, even 
though that leaves some scenarios somewhat on the edge of the two (parallelism).



--
Dmitry Olshansky



Re: Temporarily disable all purity for debug prints

2011-04-16 Thread bearophile
Walter:

 No, it is not. You seem to be thinking that purity is just a bug finding or
 optimization feature. That is not so. Purity is a guarantee that can be relied
 upon for a program's behavior. Breaking purity breaks that guarantee.
 
 (Think multithreaded programs, for example.)

I was not thinking about multi-threaded programs, you are right.

But I think just commenting out the pure attributes doesn't turn a correct 
multi-threaded program into an incorrect one.


 If you take a D2 program, you remove all its pure attributes and you compile
 it again, the result is generally a program just as correct as before.

 "generally" is not a verifiable characteristic. When we talk about safety,
 we're talking about a verifiable guarantee.

Your solution too (of allowing impure code inside debug{}) is breaking the 
guarantees.

With "generally correct" I meant to say that if you use the -nopure switch then 
the program is as correct as before, but in some uncommon cases it doesn't 
compile, because it doesn't satisfy some static purity requirements that it 
contains. A program that doesn't compile is safe :-) So while my solution was 
not perfect, I think your solution is less safe than mine.

My purpose was to present a debugging problem I have had, to suggest one 
solution, and to try to explain why my solution isn't breaking safety (unlike 
yours). I think you have now understood my ideas. You have more programming 
experience than me, the design decision is yours. Thank you for giving one 
solution to the original debug problem, and for your answers in this thread.

Bye,
bearophile


Re: std.parallelism: Naming?

2011-04-16 Thread Michel Fortin

On 2011-04-16 14:39:23 -0400, dsimcha dsim...@yahoo.com said:

I'm reconsidering the naming of std.parallelism.  The name is catchy, 
but perhaps too general.  std.parallelism currently targets SMP 
parallelism.  In the future it would be nice for Phobos to target SIMD 
parallelism and distributed message passing parallelism, too.  These 
might belong in different modules.  Then again, std.smp or 
std.multicore or something just doesn't sound as catchy.  SIMD would 
probably just be array ops and stuff.  Distributed message passing 
would probably be absorbed by std.concurrency since the distinction 
between concurrency and parallelism isn't as obvious at this level and 
std.concurrency is already the home of message passing stuff.  Please 
comment.


While parallelism might be too general, isn't it true that it's too 
specific at the same time? I mean, the module includes a concurrent 
task system, some sugar to parallelize loops using tasks (foreach, map, 
reduce), and an async buffer implementation also based on tasks. Of 
those, which are truly parallelism?
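
For reference, a minimal sketch of the pieces enumerated above, assuming the 
std.parallelism API as proposed (task, taskPool, parallel, map, asyncBuf):

import std.math, std.parallelism, std.range;

int square(int x) { return x * x; }

void main()
{
    // The concurrent task system: queue work on the pool, wait for the result.
    auto t = task!square(21);
    taskPool.put(t);
    assert(t.yieldForce == 441);

    // Sugar to parallelize a loop using tasks:
    auto nums = new double[1_000];
    foreach (i, ref x; taskPool.parallel(nums))
        x = sqrt(cast(double) i);

    // Parallel map over a lazy range, and the task-based async buffer:
    auto squares = taskPool.map!square(iota(1_000));
    auto buffered = taskPool.asyncBuf(iota(1_000));
}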



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: RFC: Units of measurement for D (Phobos?)

2011-04-16 Thread David Nadlinger

On 4/12/11 6:40 PM, David Nadlinger wrote:

- The helper functions for creating instances of new unit types (scale,
affine, ...) are currently template functions taking an instance of the
unit they manipulate as actual argument. This is only for »historical«
reasons really, would it be cleaner to use just templates?


I just went ahead and changed them to pure templates, having the unit 
helpers as functions only to infer the type of the passed unit instance 
really made no sense in the current design (which uses template alias 
parameters to pass unit instances heavily anyway).


Another missing thing I didn't mention in the original post is support 
for named derived units – currently, there is no way you could enable 
(kilogram * metre / pow!2(second)) to be printed as »Newton«. It wouldn't 
be hard to implement, but I didn't really feel the need for it so far…


David



Re: GC for pure functions -- implementation ideas

2011-04-16 Thread Timon Gehr
 Yeah, I never formalized it at all, but that's roughly what TempAlloc
 accomplishes.  My other concern is, what happens in the case of the
 following code:

 uint nonLeaky() pure {
  foreach(i; 0..42) {
   auto arr = new uint[666];
   // do stuff
  }

  return 8675309;
 }

 In this case the arr instance from every loop iteration is retained
 until nonLeaky() returns, whether it's referenced or not.  Granted, this
 is a silly example, but I imagine there are cases where stuff like this
 happens in practice.

It should be trivial to just deallocate when arr goes out of scope.

This would be more complicated to resolve, as the analogy to stack allocation
vanishes:

uint nonLeaky() pure {
 uint[] arr1 = new uint[666];
 uint[] arr2 = new uint[666];
 foreach(i; 0..42) {
  arr1 = new uint[66*i];
  // do stuff
 }

 return 8675309;
}

The problem is that, inside a non-leaky pure function, the general case for
dynamic allocations might be just as complicated as in other parts of the
program. However, the problem only exists when the pure function
deletes/overwrites its own references, which are the only ones it is allowed to
modify. Therefore, the compiler could just use the GC heap whenever a reference
is assigned two or more times inside a non-leaky pure function. I think it might
be a better option than letting the pure heap fill up with garbage.
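
To make that rule concrete, a hypothetical annotation of which allocations 
could stay in a region-style pure heap and which would fall back to the GC heap 
(an illustration of the proposal only, not an implemented optimization):

uint example() pure {
    uint[] once = new uint[10];  // assigned exactly once: could live in the
                                 // region heap, freed when example() returns
    uint[] many;
    foreach (i; 0 .. 42)
        many = new uint[i + 1];  // 'many' is reassigned repeatedly, so under
                                 // the proposed rule these allocations would
                                 // go to the ordinary GC heap instead
    return cast(uint)(once.length + many.length);
}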


Re: Temporarily disable all purity for debug prints

2011-04-16 Thread Walter Bright

On 4/16/2011 1:21 PM, bearophile wrote:

Walter:


No, it is not. You seem to be thinking that purity is just a bug finding
or optimization feature. That is not so. Purity is a guarantee that can be
relied upon for a program's behavior. Breaking purity breaks that
guarantee.

(Think multithreaded programs, for example.)


I was not thinking about multi-threaded programs, you are right.

But I think just commenting out the pure attributes doesn't turn a correct
multi-threaded program into an incorrect one.


When you combine that with allowing impure code, then yes it does.
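
A hypothetical illustration of that: a function which is race-free while pure, 
plus an impure debug line that silently introduces a data race once purity is 
compiled away:

import core.thread;

__gshared int debugCalls; // global state a debug print might touch

int twice(int x) /* pure, until the debug line below is let in */ {
    debugCalls = debugCalls + 1; // unsynchronized read-modify-write: a race
    return 2 * x;
}

void main() {
    // Two threads calling twice() were safe while it was pure; with the
    // impure debug line they now race on debugCalls.
    auto t1 = new Thread({ foreach (i; 0 .. 1_000) twice(i); });
    auto t2 = new Thread({ foreach (i; 0 .. 1_000) twice(i); });
    t1.start(); t2.start();
    t1.join();  t2.join();
}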



If you take a D2 program, you remove all its pure attributes and you
compile it again, the result is generally a program just as correct as
before.



"generally" is not a verifiable characteristic. When we talk about safety,
we're talking about a verifiable guarantee.


Your solution too (of allowing impure code inside debug{}) is breaking the
guarantees.


Yes. It leaves the onus of correctness on the user rather than the compiler. 
That's what is required when doing, say, a printf.





My purpose was to present a debugging problem I have had, to suggest one
solution, and to try to explain why my solution isn't breaking safety (unlike
yours).


Any solution either breaks safety or is useless. A printf call breaks purity. 
There is no way around that. A compiler switch cannot change that.


Re: GC for pure functions -- implementation ideas

2011-04-16 Thread bearophile
Timon Gehr:

 The problem is that, inside a non-leaky pure function, the general case for
 dynamic allocations might be just as complicated as in other parts of the
 program.

If this is true, then you need a sealed temporary heap.

Bye,
bearophile


Re: A use case for fromStringz

2011-04-16 Thread spir

On 04/16/2011 06:55 AM, Andrej Mitrovic wrote:

I wonder.. in all these years.. have they ever thought about using a
convention in C where the length is embedded as a 32/64bit value at
the pointed location of a pointer, followed by the array contents?


Sometimes called Pascal strings (actually, IIRC, the length is at the address 
/before/ the one pointed to by the pointer). One of the important diffs between 
C & Pascal from the practical POV.

Actually, it's the same diff as C arrays vs true arrays like D's.

Denis
--
_
vita es estrany
spir.wikidot.com



Vector operations doesn't convert to a common type?

2011-04-16 Thread simendsjo

int[3]   a = [1,2,4];
float[3] b = [1,2,4];
float[3] c;
// Why doesn't this work?
c = a[] + b[]; // Error: incompatible types for ((a[]) + (b[])): 
'int[]' and 'float[]'

// When this works?
c[0] = a[0] + b[0];
c[1] = a[1] + b[1];
c[2] = a[2] + b[2];
assert(c == [2,4,8]);
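
One workaround, assuming the goal is to keep the vector syntax: convert the int 
operand to float first, so both slices have the same element type:

int[3]   a = [1,2,4];
float[3] b = [1,2,4];
float[3] c;

float[3] af;
foreach (i, v; a)    // manual int -> float conversion
    af[i] = v;
c[] = af[] + b[];    // both operands are now float[], so the vector op works
assert(c == [2,4,8]);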


Re: Vector operations doesn't convert to a common type?

2011-04-16 Thread simendsjo

On 16.04.2011 12:12, simendsjo wrote:

int[3] a = [1,2,4];
float[3] b = [1,2,4];
float[3] c;
// Why doesn't this work?
c = a[] + b[]; // Error: incompatible types for ((a[]) + (b[])): 'int[]'
and 'float[]'
// When this works?
c[0] = a[0] + b[0];
c[1] = a[1] + b[1];
c[2] = a[2] + b[2];
assert(c == [2,4,8]);


I tried using a template mixin to avoid having to type this all over, 
but I cannot get it to work... I think I have something wrong with my 
alias usage, but that's just speculation :)


mixin template ArrayVectorOp(string op, alias dest, alias a, alias b, int I)
if(dest.length == a.length && dest.length == b.length && I >= 1 && I <= 
dest.length) {

// dest[I-1] = a[I-1] op b[I-1]
mixin(dest.stringof~"["~(I-1).stringof~"]"
   ~" = "
  ~a.stringof~"["~(I-1).stringof~"]"
  ~op
  ~b.stringof~"["~(I-1).stringof~"]"~";");

static if(I > 1)
mixin ArrayVectorOp!(op, dest, a, b, I-1);
}

void main() {
int[3]   a = [1,   2,   3];
float[3] b = [2.1, 3.2, 4.3];
float[3] c;
mixin ArrayVectorOp!("+", c, a, b, a.length);
assert(c == [3.1, 5.2, 7.3]);
}


Gives the following output:

t.d(4): no identifier for declarator c[3 - 1]
t.d(4): Error: c is used as a type
t.d(4): Error: cannot implicitly convert expression (cast(float)a[2u] + b[2u]) of type float to const(_error_[])
t.d(4): no identifier for declarator c[2 - 1]
t.d(4): Error: c is used as a type
t.d(4): Error: cannot implicitly convert expression (cast(float)a[1u] + b[1u]) of type float to const(_error_[])
t.d(4): no identifier for declarator c[1 - 1]
t.d(4): Error: c is used as a type
t.d(4): Error: cannot implicitly convert expression (cast(float)a[0u] + b[0u]) of type float to const(_error_[])
t.d(11): Error: mixin t.main.ArrayVectorOp!("+",c,a,b,3u).ArrayVectorOp!(op,c,a,b,2).ArrayVectorOp!(op,c,a,b,1) error instantiating
t.d(11): Error: mixin t.main.ArrayVectorOp!("+",c,a,b,3u).ArrayVectorOp!(op,c,a,b,2) error instantiating
t.d(18): Error: mixin t.main.ArrayVectorOp!("+",c,a,b,3u) error instantiating
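
For comparison, a version that does compile, assuming the intent is an 
element-wise 'dest = a op b' over static arrays; it uses a template function 
with a statement mixin instead of a recursive template mixin:

import std.stdio;

void arrayVectorOp(string op, T, size_t N, A, B)(ref T[N] dest, in A[N] a, in B[N] b)
{
    foreach (i; 0 .. N)
        mixin("dest[i] = cast(T)(a[i] " ~ op ~ " b[i]);");
}

void main()
{
    int[3]   a = [1,   2,   3];
    float[3] b = [2.1, 3.2, 4.3];
    float[3] c;
    arrayVectorOp!"+"(c, a, b);
    writeln(c); // [3.1, 5.2, 7.3], up to float rounding
}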


auto arr = new int[10];

2011-04-16 Thread %u
is there any difference between:
auto arr = new int[10];
and
int[10] arr;
?


Re: auto arr = new int[10];

2011-04-16 Thread Piotr Szturmaj

%u wrote:

is there any different b/w:
auto arr = new int[10];


arr is a dynamic array of int with ten elements


and
int[10] arr;
?


arr is a static array of int with ten elements
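
A small demonstration of the practical difference:

import std.stdio;

void main()
{
    auto d = new int[10]; // int[]: a slice of GC-heap storage, resizable
    int[10] s;            // int[10]: fixed length, value semantics

    d ~= 42;              // fine: dynamic arrays can be appended to
    // s ~= 42;           // compile error: static arrays have a fixed length

    writeln(typeof(d).stringof, " length ", d.length); // int[] length 11
    writeln(typeof(s).stringof, " length ", s.length); // int[10] length 10
}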



Re: Vector operations doesn't convert to a common type?

2011-04-16 Thread bearophile
simendsjo:

  int[3]   a = [1,2,4];
  float[3] b = [1,2,4];
  float[3] c;
  // Why doesn't this work?
  c = a[] + b[]; // Error: incompatible types for ((a[]) + (b[])): 
 'int[]' and 'float[]'

  // When this works?
  c[0] = a[0] + b[0];
  c[1] = a[1] + b[1];
  c[2] = a[2] + b[2];
  assert(c == [2,4,8]);

Vector ops are often implemented in assembly, and despite being less flexible, 
they sometimes lead to more efficiency (if the arrays are large). The second 
example uses normal D code, which is much more flexible.

Bye,
bearophile


Re: A use case for fromStringz

2011-04-16 Thread Andrej Mitrovic
Yeah I basically took the idea from the existing D implementation.
Although D's arrays are a struct with a length and a pointer (I think
so).
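
That is indeed the case; conceptually a D dynamic array is a (length, pointer) 
pair:

// Conceptual layout of a D dynamic array (illustrative only):
struct Slice(T) { size_t length; T* ptr; }

void main()
{
    int[] a = [1, 2, 3];
    assert(a.length == 3);
    assert(*a.ptr == 1); // .length and .ptr expose the two fields directly
}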


accessing your scope for template metaprogramming ?

2011-04-16 Thread Sean Cavanaugh
Is there any way to access your current scope?  For instance, the 
following pseudocode is what I am after, more or less:



class Foo
{
  alias byte function(int) BarCompatibleInterface;

  byte Bar(int)
  {
static assert(is(typeof(localscope) == function));

static assert(is(std.traits.ReturnType!(localscope) == 
std.traits.ReturnType!(BarCompatibleInterface)));


static assert(is(std.traits.ParameterTypeTuple!(localscope) == 
std.traits.ParameterTypeTuple!(BarCompatibleInterface)));


static assert(is(typeof(localscope.outer) == class));

static assert(is(typeof(localscope.outer.outer) == module));
  }
}


My desire for this is to help the compiler generate better error 
messages from the compilation of string mixins, as they could access the 
function and/or class they are in and report it from within a 
static assert() construct.


Re: accessing your scope for template metaprogramming ?

2011-04-16 Thread bearophile
Sean Cavanaugh:

 Is there any way to access the your current scope?  For instance the 
 following pseudocode is what I am after more or less:
 
 
 class Foo
 {
alias byte function(int) BarCompatibleInterface;
 
byte Bar(int)
{
  static assert(is(typeof(localscope) == function));

You have just invented another purpose for something like a __function, which 
refers to the current function; I originally desired it in order to solve a 
different problem.

See my Comments 1 and 2 here:
http://d.puremagic.com/issues/show_bug.cgi?id=5140

Bye,
bearophile


[Issue 5678] new enum struct re-allocated at compile time

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5678



--- Comment #2 from bearophile_h...@eml.cc 2011-04-16 06:46:24 PDT ---
Is the same problem with associative arrays covered in another bug report?

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5846] New: String literals can be assigned to static char arrays without .dup

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5846

   Summary: String literals can be assigned to static char arrays
without .dup
   Product: D
   Version: D2
  Platform: Other
OS/Version: Windows
Status: NEW
  Keywords: accepts-invalid
  Severity: normal
  Priority: P2
 Component: DMD
AssignedTo: nob...@puremagic.com
ReportedBy: andrej.mitrov...@gmail.com


--- Comment #0 from Andrej Mitrovic andrej.mitrov...@gmail.com 2011-04-16 
10:36:37 PDT ---
Isn't the following invalid code?

char[3] value = "abc";
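
For context, initializing a static array from a literal of matching length 
copies the contents, so later mutation touches only the copy; a quick check of 
the behavior (assuming current DMD, which accepts the code):

void main()
{
    char[3] value = "abc"; // copies the literal into the static array
    value[0] = 'x';        // mutates the copy only
    assert(value[] == "xbc");
    assert("abc"[0] == 'a'); // the immutable literal is untouched
}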

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5847] New: Threads started by core.thread should have same floating point state as main thread

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5847

   Summary: Threads started by core.thread should have same
floating point state as main thread
   Product: D
   Version: unspecified
  Platform: Other
OS/Version: Windows
Status: NEW
  Severity: normal
  Priority: P2
 Component: druntime
AssignedTo: nob...@puremagic.com
ReportedBy: dsim...@yahoo.com


--- Comment #0 from David Simcha dsim...@yahoo.com 2011-04-16 11:28:58 PDT ---
The following example code runs the same floating point function in two threads
(though not concurrently).  The answer produced by each thread is different in
the low order bits on Windows:

import std.algorithm, core.thread, std.stdio, core.stdc.fenv;

real sumRange(const(real)[] range) {
writeln("Rounding mode: ", fegetround);  // 0 from both threads.
return reduce!"a + b"(range);
}

void main() {
immutable n = 1_000_000;
immutable delta = 1.0 / n;

auto terms = new real[1_000_000];
foreach(i, ref term; terms) {
immutable x = ( i - 0.5 ) * delta;
term = delta / ( 1.0 + x * x ) * 1;
}

immutable res1 = sumRange(terms);
writefln("%.19f", res1);

real res2;
auto t = new Thread( { res2 = sumRange(terms); } );
t.start();
t.join();
writefln("%.19f", res2);
}


Output:
Rounding mode:  0
0.7853986633972191094
Rounding mode:  0
0.7853986633972437348 

If I change the new Thread statement to the following:

auto t = new Thread( {
asm { fninit; }
res2 = sumRange(terms);
} );


then both threads print the same answer.  This needs fixing because, when
performing floating point operations in parallel, it can lead to results that
are non-deterministic and depend on how the work is scheduled.

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5848] New: Coverage always report 0000000 for inlined function

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5848

   Summary: Coverage always report 0000000 for inlined function
   Product: D
   Version: D2
  Platform: Other
OS/Version: Mac OS X
Status: NEW
  Severity: normal
  Priority: P2
 Component: DMD
AssignedTo: nob...@puremagic.com
ReportedBy: kenn...@gmail.com


--- Comment #0 from kenn...@gmail.com 2011-04-16 11:39:31 PDT ---
When a function is inlined, the coverage result does not consider it to have
been called, and always reports the coverage count as 0000000.

For example, the program:

int inlined(int p, int q) {
return p+q;
}
void main() {
inlined(1, 3);
}

without -inline, the coverage result is:

       |int inlined(int p, int q) {
      1|    return p+q;
       |}
       |void main() {
      1|    inlined(1, 3);
       |}
x.d is 100% covered

with -inline, the 'inlined' function becomes uncovered:

       |int inlined(int p, int q) {
0000000|    return p+q;
       |}
       |void main() {
      1|    inlined(1, 3);
       |}
x.d is 50% covered

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5848] Coverage always report 0000000 for inlined function

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5848


bearophile_h...@eml.cc changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |bearophile_h...@eml.cc


--- Comment #1 from bearophile_h...@eml.cc 2011-04-16 11:50:22 PDT ---
What kind of textual output do you desire in this situation?

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5848] Coverage always report 0000000 for inlined function

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5848



--- Comment #2 from kenn...@gmail.com 2011-04-16 12:06:15 PDT ---
(In reply to comment #1)
 What kind of textual output do you desire in this situation?

What do you mean?

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5848] Coverage always report 0000000 for inlined function

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5848



--- Comment #3 from bearophile_h...@eml.cc 2011-04-16 12:54:22 PDT ---
(In reply to comment #2)
 (In reply to comment #1)
  What kind of textual output do you desire in this situation?
 
 What do you mean?

What coverage output (the textual .lst file) do you want DMD to save to disk
for that inlined() function when you compile the program with the -inline
switch?

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 4851] Three suggestions for std.random

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=4851



--- Comment #2 from bearophile_h...@eml.cc 2011-04-16 12:54:38 PDT ---
That fourth idea is also useful to avoid a little trap. This code looks
correct, using randomCover() to act like Python's random.choice(), but it
keeps producing the same value:
import std.stdio, std.random;
void main() {
// can't be const
/*const*/ int[] data = [1, 2, 3, 4];
foreach (i; 0 .. 20) {
int r = randomCover(data, rndGen).front;
write(r, " ");
}
}

Output:
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1


The same bug can't happen with code like this, because an identical copy of the
random generator is not taken inside the foreach scope on each iteration:

import std.stdio, std.random;
void main() {
// can't be const
/*const*/ int[] data = [1, 2, 3, 4];
foreach (i; 0 .. 20) {
int r = randomCover(data).front;
// int r = choice(data); // better
write(r, " ");
}
}

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5140] Add __FUNCTION__

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5140



--- Comment #2 from bearophile_h...@eml.cc 2011-04-16 15:36:18 PDT ---
See a better explanation and some examples of __function:
http://rosettacode.org/wiki/Anonymous_recursion

And I see this is another use case for __function:

http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D.learn&article_id=26404

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5849] New: std.random.dice is better as a range

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5849

   Summary: std.random.dice is better as a range
   Product: D
   Version: D2
  Platform: All
OS/Version: All
Status: NEW
  Severity: enhancement
  Priority: P2
 Component: Phobos
AssignedTo: nob...@puremagic.com
ReportedBy: bearophile_h...@eml.cc


--- Comment #0 from bearophile_h...@eml.cc 2011-04-16 18:06:59 PDT ---
When I have to use std.random.dice the frequencies don't change often, but I
usually have to compute many random values efficiently. Calling dice() many
times is not efficient, because each call repeats some preprocessing of the
given frequencies.

So I suggest replacing the function dice(xxx) with a range that generates the
results.

So the current usage of:
dice(70, 20, 10)
gets replaced by:
dice(70, 20, 10).front

And you are able to write:

take(dice(70, 20, 10), 5)

The good thing about this generator is its performance compared to the function.
See the performance difference between the two following implementations (5.3
seconds for the first version and 0.7 seconds for the second one).



import std.stdio, std.random, std.string;

void main() {
  enum int N = 10_000_000;
  enum pr = [1/5., 1/6., 1/7., 1/8., 1/9., 1/10., 1/11., 1759/27720.];

  double[pr.length] counts = 0.0;
  foreach (i; 0 .. N)
counts[dice(pr)]++;

  foreach (i, p; pr)
writefln("%.8f   %.8f", p, counts[i] / N);
}



import std.stdio, std.random, std.string;

void main() {
  enum int N = 10_000_000;
  enum pr = [1/5., 1/6., 1/7., 1/8., 1/9., 1/10., 1/11., 1759/27720.];
  double[pr.length] cumulatives = pr[];
  foreach (i, ref c; cumulatives[1 .. $-1])
c += cumulatives[i];
  cumulatives[$-1] = 1.0;

  double[pr.length] counts = 0.0;
  auto rnd = Xorshift(unpredictableSeed());
  foreach (i; 0 .. N) {
double rnum = rnd.front() / cast(double)typeof(rnd.front()).max;
rnd.popFront();
int j;
for ( ; rnum > cumulatives[j]; j++) {}
counts[j]++;
  }

  foreach (i, p; pr)
writefln("%.8f   %.8f", p, counts[i] / N);
}
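
A minimal sketch of the proposed range (the names are hypothetical): the
cumulative frequencies are computed once up front, so each front/popFront
costs one uniform() call plus a scan:

import std.random, std.range;

struct DiceRange
{
    private double[] cumul;   // cumulative frequencies, computed once
    private size_t current;

    this(in double[] freqs)
    {
        cumul = new double[freqs.length];
        double sum = 0.0;
        foreach (i, f; freqs) { sum += f; cumul[i] = sum; }
        popFront(); // prime the first value
    }

    enum bool empty = false;  // an infinite input range
    @property size_t front() const { return current; }

    void popFront()
    {
        immutable r = uniform(0.0, cumul[$ - 1]); // in [0, total)
        current = 0;
        while (r >= cumul[current]) ++current;
    }
}

DiceRange diceRange(double[] freqs...) { return DiceRange(freqs); }

// Usage: diceRange(70, 20, 10).front, or take(diceRange(70, 20, 10), 5).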

-

See also some other improvements I have suggested for std.random.

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5850] New: Default arguments of out and ref arguments

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5850

   Summary: Default arguments of out and ref arguments
   Product: D
   Version: D2
  Platform: Other
OS/Version: Windows
Status: NEW
  Keywords: wrong-code
  Severity: normal
  Priority: P2
 Component: DMD
AssignedTo: nob...@puremagic.com
ReportedBy: bearophile_h...@eml.cc


--- Comment #0 from bearophile_h...@eml.cc 2011-04-16 18:11:29 PDT ---
This D2 program compiles with no errors and runs raising no assert errors:

void foo(out int x=1, ref int y=2) {}
void main() {
int x, y;
foo(x, y);
assert(x == 0 && y == 0);
}


If default arguments for out and ref parameters can't be made to work, then I
suggest disallowing them statically.

In Ada (2012) "A default_expression is only allowed in a
parameter_specification for a formal parameter of mode in". See point 19 here:
http://www.ada-auth.org/standards/12rm/html/RM-6-1.html

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 1389] Can't use mixin expressions when start of a statement.

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=1389


Kenji Hara k.hara...@gmail.com changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Keywords|                            |patch
                 CC|                            |k.hara...@gmail.com


--- Comment #1 from Kenji Hara k.hara...@gmail.com 2011-04-16 19:29:03 PDT ---
Patch posted.

https://github.com/D-Programming-Language/dmd/pull/31

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5771] Template constructor and auto ref do not work

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5771


Kenji Hara k.hara...@gmail.com changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Keywords|                            |patch, rejects-valid


--- Comment #1 from Kenji Hara k.hara...@gmail.com 2011-04-16 19:30:15 PDT ---
Patch posted.

https://github.com/D-Programming-Language/dmd/pull/30

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5657] Temporary object destruction

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5657



--- Comment #6 from Kenji Hara k.hara...@gmail.com 2011-04-16 19:31:55 PDT ---
Pull request.

https://github.com/D-Programming-Language/dmd/pull/26

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5678] new enum struct re-allocated at compile time

2011-04-16 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5678



--- Comment #3 from Don clugd...@yahoo.com.au 2011-04-16 22:26:16 PDT ---
(In reply to comment #2)
 Is the same problem with associative arrays in another bug report?

The general bug, which includes AAs, was fixed with this commit:

https://github.com/donc/dmd/commit/fc67046cf1e66182d959309fb15ef9e2d4c266b9

The only thing which was unique to this one is the use of 'new'.

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---