Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-28 Thread Sönke Ludwig via Digitalmars-d-announce

On 28.06.2014 05:33, Peter Alexander wrote:

On Saturday, 28 June 2014 at 02:46:25 UTC, safety0ff wrote:

On Saturday, 28 June 2014 at 02:02:28 UTC, Peter Alexander wrote:

int a;
const int b;
immutable int c;
foo(a);
foo(b);
foo(c);

These all call foo!int


Awesome, thanks!


... I just tried this and I'm wrong. The qualifier isn't stripped. Gah!
Three different versions!

I could have sworn D did this for primitive types. This makes me sad :-(


I *think* it does this if you define foo as foo(T)(const(T) arg), though.
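
For reference, a compilable sketch of that suggestion (my own illustration, not from the thread): with a `const(T)` parameter, IFTI deduces the unqualified `T`, so all three calls below share a single instantiation.

```d
import std.stdio : writeln;

// Illustration: with a const(T) parameter, IFTI deduces the unqualified T,
// so int, const int and immutable int all resolve to foo!int.
void foo(T)(const(T) arg)
{
    static assert(is(T == int)); // T is plain int for every call below
    writeln(typeof(arg).stringof); // the parameter itself is const(int)
}

void main()
{
    int a;
    const int b = 1;
    immutable int c = 2;
    foo(a); // foo!int
    foo(b); // foo!int
    foo(c); // foo!int
}
```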


Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-28 Thread safety0ff via Digitalmars-d-announce

On Saturday, 28 June 2014 at 16:51:56 UTC, Sönke Ludwig wrote:

On 28.06.2014 05:33, Peter Alexander wrote:

On Saturday, 28 June 2014 at 02:46:25 UTC, safety0ff wrote:

On Saturday, 28 June 2014 at 02:02:28 UTC, Peter Alexander wrote:

int a;
const int b;
immutable int c;
foo(a);
foo(b);
foo(c);

These all call foo!int


Awesome, thanks!


... I just tried this and I'm wrong. The qualifier isn't 
stripped. Gah!

Three different versions!

I could have sworn D did this for primitive types. This makes 
me sad :-(


I *think* it does this if you define foo as foo(T)(const(T) 
arg), though.


Thanks, that works.
std.math doesn't do this for its templated functions, should it?

Is there an easy way to shared-strip primitive types?
Perhaps passing non-ref/non-pointer primitive data to const(T) 
should implicitly strip shared.

Reading of the shared data occurs at the call site.
Are there any use cases where passing on the shared-ness of a 
primitive type to non-ref const(T) is useful?
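
One way such a shared-strip could be spelled (the helper name and design are my own sketch, not Phobos): read the value at the call site and hand back an unqualified copy. Note that current compilers want the read done via `core.atomic`.

```d
import core.atomic : atomicLoad;
import std.traits : Unqual;

// Hypothetical helper: read a shared primitive at the call site and
// return it with shared (and any const) stripped off.
Unqual!T unshared(T)(ref shared T value)
if (__traits(isArithmetic, T))
{
    return cast(Unqual!T) atomicLoad(value);
}

void main()
{
    shared int counter = 42;
    int plain = unshared(counter); // shared-ness stripped at the read
    assert(plain == 42);
}
```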


Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-28 Thread Meta via Digitalmars-d-announce
On Friday, 27 June 2014 at 15:31:17 UTC, Andrei Alexandrescu 
wrote:

http://www.reddit.com/r/programming/comments/298vtt/dconf_2014_panel_with_walter_bright_and_andrei/

https://twitter.com/D_Programming/status/482546357690187776

https://news.ycombinator.com/newest

https://www.facebook.com/dlang.org/posts/874091959271153


Andrei


Tuple destructuring syntax, straight from the horse's mouth =)

However, you said that you want destructuring for 
std.typecons.Tuple. Does this extend to built-in tuples? Looking 
at Kenji's DIP[1], it seems the proposed syntax is for built-in 
tuples.


Re: DConf Day 1 Panel with Walter Bright and Andrei Alexandrescu

2014-06-28 Thread Meta via Digitalmars-d-announce

On Sunday, 29 June 2014 at 02:07:48 UTC, Meta wrote:
On Friday, 27 June 2014 at 15:31:17 UTC, Andrei Alexandrescu 
wrote:

http://www.reddit.com/r/programming/comments/298vtt/dconf_2014_panel_with_walter_bright_and_andrei/

https://twitter.com/D_Programming/status/482546357690187776

https://news.ycombinator.com/newest

https://www.facebook.com/dlang.org/posts/874091959271153


Andrei


Tuple destructuring syntax, straight from the horse's mouth =)

However, you said that you want destructuring for 
std.typecons.Tuple. Does this extend to built-in tuples? 
Looking at Kenji's DIP[1], it seems the proposed syntax is for 
built-in tuples.


Whoops, link to the DIP: http://wiki.dlang.org/DIP32


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/27/2014 10:18 PM, Walter Bright wrote:

On 6/27/2014 4:10 AM, John Colvin wrote:

*The number of algorithms that are both numerically stable/correct and benefit
significantly from > 64bit doubles is very small.


To be blunt, baloney. I ran into these problems ALL THE TIME when doing
professional numerical work.



Sorry for being so abrupt. FP is important to me - it's not just about 
performance, it's also about accuracy.


Re: Module level variable shadowing

2014-06-28 Thread dennis luehring via Digitalmars-d

On 28.06.2014 07:11, H. S. Teoh via Digitalmars-d wrote:

On Sat, Jun 28, 2014 at 06:37:08AM +0200, dennis luehring via Digitalmars-d 
wrote:

On 27.06.2014 20:09, Kapps wrote:

[...]

struct Foo {
   int a;
   this(int a) {
   this.a = a;
   }
}


forgot that case - but i don't like how it's currently handled, maybe
no better way - it's just not perfect :)


Actually, this particular use case is very bad. It's just inviting
typos, for example, if you mistyped int a as int s, then you get:

struct Foo {
int a;
this(int s) {
this.a = a; // oops, now it means this.a = this.a
}
}

I used to like this shadowing trick, until one day I got bit by this
typo. From then on, I acquired a distaste for this kind of shadowing.
Not to mention, typos are only the beginning of troubles. If you copy a
few lines from the ctor into another method (e.g., to partially reset
the object state), then you end up with a similar unexpected rebinding
to this.a, etc..

Similar problems exist in nested functions:

auto myFunc(A...)(A args) {
int x;
int helperFunc(B...)(B args) {
int x = 1;
return x + args.length;
}
}

Accidentally mistype B args or int x=1, and again you get a silent
bug. This kind of shadowing is just a minefield of silent bugs waiting
to happen.

No thanks!


T



thx for the examples - never thought of these problems

i personally would just forbid any shadowing and single-self-assign
and then having unique names (i use m_ for members and p_ for parameters 
etc.) or give a compile error asking for this.x or .x (maybe problematic 
with inner structs/functions)


but that could be a problem for C/C++ code porting - but is that such a 
big problem?





[OT] The ART runtime presentation at Google I/O 2014

2014-06-28 Thread Paulo Pinto via Digitalmars-d

Hi,

posting this talk here, as Java performance on Android is often 
mentioned in the discussions about GC vs RC performance.


So here you have the Android team explaining:

- The Dalvik GC sucks: it is a basic stop-the-world implementation with 
pauses > 10ms


- ART solves the GC issue by adopting multiple concurrent GC algorithms, 
chosen depending on the application state, thus reducing the pauses down 
to 3ms. More optimizations are still to be done until the final release


- The JIT compiler is optimized for battery life and low-power processors, 
thus only very hot methods actually get compiled to native code


- Performance is sorted out by doing AOT compilation on install, similar 
to the approach taken by OS/400 and Windows Phone 8


https://www.youtube.com/watch?v=EBlTzQsUoOw



--
Paulo




Re: Pair literal for D language

2014-06-28 Thread Jacob Carlborg via Digitalmars-d

On 2014-06-26 23:16, H. S. Teoh via Digitalmars-d wrote:


Note that the word tuple in D is used to refer to two very different
things. What you want is essentially an anonymous struct, and should be
adequately covered by std.typecons.tuple.


I would like some form of anonymous struct, with some kind of pattern 
matching on the fields [1].


[1] http://forum.dlang.org/thread/kfbnuc$1cro$1...@digitalmars.com

--
/Jacob Carlborg


Android ART, some very interesting lessons we can use for D (including GC discussions)

2014-06-28 Thread deadalnix via Digitalmars-d

https://www.youtube.com/watch?v=EBlTzQsUoOw

Surprised by what the current Android GC does (it is kind of 
crappy in fact) and what they do with ART, which is actually 
quite smart!


Re: Android ART, some very interesting lessons we can use for D (including GC discussions)

2014-06-28 Thread Paulo Pinto via Digitalmars-d

On 28.06.2014 09:26, deadalnix wrote:

https://www.youtube.com/watch?v=EBlTzQsUoOw

Surprised by what the current Android GC does (it is kind of crappy in
fact) and what they do with ART, which is actually quite smart!


I guess you missed my post :)

 http://forum.dlang.org/thread/loln0t$qu2$1...@digitalmars.com


Yes, it is what I keep telling in the recurring GC vs ARC discussions,
every time Objective-C on iOS gets compared with Java on Android.

Dalvik sucks and Google hasn't invested much effort into it since 
around 2.3 got released, yet people look at their Android device and 
judge Java on mobile devices by Dalvik's lack of quality.



--
Paulo


Re: std.math performance (SSE vs. real)

2014-06-28 Thread John Colvin via Digitalmars-d

On Saturday, 28 June 2014 at 06:16:51 UTC, Walter Bright wrote:

On 6/27/2014 10:18 PM, Walter Bright wrote:

On 6/27/2014 4:10 AM, John Colvin wrote:
*The number of algorithms that are both numerically 
stable/correct and benefit

significantly from > 64bit doubles is very small.


To be blunt, baloney. I ran into these problems ALL THE TIME 
when doing

professional numerical work.



Sorry for being so abrupt. FP is important to me - it's not 
just about performance, it's also about accuracy.


I still maintain that the need for the precision of 80bit reals 
is a niche demand. It's a very important niche, but it doesn't 
justify having its relatively extreme requirements be the 
default. Someone writing a matrix inversion has only themselves 
to blame if they don't know plenty of numerical analysis and look 
very carefully at the specifications of all operations they are 
using.


Paying the cost of moving to/from the fpu, missing out on 
increasingly large SIMD units, these make everyone pay the price.


The inclusion of the 'real' type in D was a great idea, but std.math 
should be overloaded for float/double/real so people have the 
choice of where they stand on the performance/precision front.
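
A sketch of the kind of overload set being asked for (the function name is hypothetical, and real std.math work would reimplement rather than forward): each width dispatches to the matching C routine, so float and double callers avoid the round-trip through `real`.

```d
import core.stdc.math : floorf, cFloor = floor, floorl;

// Hypothetical overload set: each precision gets its own implementation,
// so callers only pay for the precision they actually ask for.
float  myFloor(float x)  { return floorf(x); }
double myFloor(double x) { return cFloor(x); }
real   myFloor(real x)   { return floorl(x); }

void main()
{
    assert(myFloor(1.9f) == 1.0f); // picks the float overload
    assert(myFloor(1.9)  == 1.0);  // picks the double overload
    assert(myFloor(1.9L) == 1.0L); // picks the real overload
}
```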


Re: Pair literal for D language

2014-06-28 Thread Dicebot via Digitalmars-d

On Friday, 27 June 2014 at 22:01:21 UTC, Mason McGill wrote:
I like DIP54 and I think the work on fixing tuples is awesome, 
but I have 1 nit-picky question: why is it called 
TemplateArgumentList when it's not always used as template 
arguments?


  void func(string, string) { }

  TypeTuple!(string, string) var;
  var[0] = "I'm nobody's ";
  var[1] = "template argument!";
  func(var);

Why not a name that emphasizes the entity's semantics, like 
StaticList/ExpandingList/StaticTuple/ExpandingTuple?


Because it is defined by the template argument list and has exactly 
the same semantics as one. And the semantics are unique and obscure 
enough that no other name can express them precisely.


'StaticList' is what you may have wanted it to be but not what it 
is.


Re: Module level variable shadowing

2014-06-28 Thread Jacob Carlborg via Digitalmars-d

On 2014-06-28 08:19, dennis luehring wrote:


thx for the examples - never thought of these problems

i personally would just forbid any shadowing and single-self-assign
and then having unique names (i use m_ for members and p_ for parameters
etc.) or give a compile error asking for this.x or .x (maybe problematic
with inner structs/functions)


I think, in general, if you need to prefix/suffix any symbol's name, 
there's something wrong with the language.


--
/Jacob Carlborg


Re: std.math performance (SSE vs. real)

2014-06-28 Thread francesco cattoglio via Digitalmars-d

On Saturday, 28 June 2014 at 09:07:17 UTC, John Colvin wrote:

On Saturday, 28 June 2014 at 06:16:51 UTC, Walter Bright wrote:

On 6/27/2014 10:18 PM, Walter Bright wrote:

On 6/27/2014 4:10 AM, John Colvin wrote:
*The number of algorithms that are both numerically 
stable/correct and benefit

significantly from > 64bit doubles is very small.


To be blunt, baloney. I ran into these problems ALL THE TIME 
when doing

professional numerical work.

Sorry for being so abrupt. FP is important to me - it's not 
just about performance, it's also about accuracy.


When you need accuracy, 999 times out of 1000 you change the 
numerical technique, you don't just blindly upgrade the precision.
The only real reason one would use 80 bits is when there is an 
actual need of adding values which differ by more than 16 orders 
of magnitude. And I've never seen this happen in any numerical 
paper I've read.
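
A classic example of changing the technique rather than the precision (my illustration, not from the thread): Kahan compensated summation keeps a plain double sum accurate where naive summation silently drops small terms.

```d
// Kahan compensated summation: change the algorithm, not the precision.
double kahanSum(const(double)[] xs)
{
    double sum = 0.0, c = 0.0;  // c carries the rounded-off low-order bits
    foreach (x; xs)
    {
        immutable y = x - c;
        immutable t = sum + y;
        c = (t - sum) - y;      // what the addition just lost to rounding
        sum = t;
    }
    return sum;
}

void main()
{
    // 1.0 followed by a million values of 1e-16: each tiny term is below
    // half an ulp of 1.0, so a naive double sum never moves off 1.0.
    auto xs = new double[](1_000_001);
    xs[0] = 1.0;
    xs[1 .. $] = 1e-16;
    double naive = 0.0;
    foreach (x; xs) naive += x;
    assert(naive == 1.0);       // the million tiny terms vanished
    assert(kahanSum(xs) > 1.0); // the compensated sum kept them
}
```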


I still maintain that the need for the precision of 80bit reals 
is a niche demand. It's a very important niche, but it doesn't 
justify having its relatively extreme requirements be the 
default. Someone writing a matrix inversion has only themselves 
to blame if they don't know plenty of numerical analysis and 
look very carefully at the specifications of all operations 
they are using.


Couldn't agree more. 80 bit IS a niche, which is really nice to 
have, but shouldn't be the standard if we lose on performance.


Paying the cost of moving to/from the fpu, missing out on 
increasingly large SIMD units, these make everyone pay the 
price.


Especially the numerical analysts themselves will pay that price. 
64 bit HAS to be as fast as possible, if you want to be 
competitive when it comes to any kind of numerical work.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Russel Winder via Digitalmars-d
On Sat, 2014-06-28 at 09:07 +, John Colvin via Digitalmars-d wrote:
[…]
 I still maintain that the need for the precision of 80bit reals 
 is a niche demand. It's a very important niche, but it doesn't 
 justify having its relatively extreme requirements be the 
 default. Someone writing a matrix inversion has only themselves 
 to blame if they don't know plenty of numerical analysis and look 
 very carefully at the specifications of all operations they are 
 using.

I fear the whole argument is getting misguided. We should reset.

If you are doing numerical calculations then accuracy is critical.
Arbitrary precision floats are the only real (!) way of doing any
numeric non-integer calculation, and arbitrary precision integers are
the only way of doing integer calculations.

However speed is also an issue, so to obtain speed we have hardware
integer and floating point ALUs.

The cost for the integer ALU is bounded integers. Python appreciates
this and uses hardware integers when it can and software integers
otherwise. Thus Python is very good for doing integer work. C, C++, Go,
D, Fortran, etc. are fundamentally crap for integer calculation because
integers are bounded. Of course if calculations are provably within the
hardware integer bounds this is not a constraint and we are happy with
hardware integers. Just don't try calculating factorial, Fibonacci
numbers and other numbers used in some bioinformatics and quant models.
There is a reason why SciPy has a massive following in bioinformatics
and quant computing.

The cost for floating point ALU is accuracy. Hardware floating point
numbers are dreadful in that sense, but again the issue is speed and for
GPU they went 32-bit for speed. Now they are going 64-bit as they can
just about get the same speed and the accuracy is so much greater. For
hardware floating point the more bits you have the better. Hence IBM in
the 360 and later having 128-bit floating point for accuracy at the
expense of some speed. Sun had 128-bit in the SPARC processors for
accuracy at the expense of a little speed.

As Walter has or will tell us, C (and thus C++) got things woefully
wrong in support of numerical work because the inventors were focused on
writing operating systems, supporting only PDP hardware. They and the
folks that then wrote various algorithms didn't really get numerical
analysis. If C had targeted IBM 360 from the outset things might have
been better.

We have to be clear on this: Fortran is the only language that supports
hardware floating types even at all well.

Intel's 80-bit floating point was an aberration; they should just have
done 128-bit in the first place. OK so they got the 80-bit stuff as a
sort of free side-effect of creating 64-bit, but they ran with it. They
shouldn't have done. I cannot see it ever happening again. cf. ARM.

By being focused on Intel chips, D has failed to get floating point
correct in a very analogous way to C failing to get floating point types
right by focusing on PDP. Yes using 80-bit on Intel is good, but no-one
else has this. Floating point sizes should be 32-, 64-, 128-, 256-bit,
etc. D needs to be able to handle this. So does C, C++, Java, etc. Go
will be able to handle it when it is ported to appropriate hardware as
they use float32, float64, etc. as their types. None of this float,
double, long double, double double rubbish.

So D should perhaps make a breaking change and have types int32, int64,
float32, float64, float80, and get away from the vagaries of bizarre
type relationships with hardware? 

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


signature.asc
Description: This is a digitally signed message part


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/28/2014 2:47 AM, francesco cattoglio wrote:

When you need accuracy, 999 times out of 1000 you change the numerical
technique, you don't just blindly upgrade the precision.


I *have* experience doing numerical work. Upgrading the precision is the first 
thing people try.




The only real reason one would use 80 bits is when there is an actual need of
adding values which differ by more than 16 orders of magnitude. And I've never
seen this happen in any numerical paper I've read.


It happens with both numerical integration and inverting matrices. Inverting 
matrices is commonplace for solving N equations with N unknowns.


Errors accumulate very rapidly and easily overwhelm the significance of the 
answer.
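
A small sketch of that accumulation effect (my illustration, not Walter's code): the same naive summation run at two precisions drifts apart by orders of magnitude.

```d
import std.math : abs;

// Naive accumulation at two precisions: the float sum drifts far from
// the exact value while the double sum stays close to it.
void main()
{
    float  f = 0.0f;
    double d = 0.0;
    foreach (i; 0 .. 10_000_000)
    {
        f += 0.1f;
        d += 0.1;
    }
    // Exact answer: 1_000_000. The float error is orders of magnitude larger.
    assert(abs(d - 1_000_000.0) < 1.0);
    assert(abs(cast(double) f - 1_000_000.0) > 1_000.0);
}
```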



Especially the numerical analysts themselves will pay that price. 64 bit HAS to
be as fast as possible, if you want to be competitive when it comes to any kind
of numerical work.


Getting the wrong answer quickly is not useful when you're calculating the 
stress levels in a part.


Again, I've done numerical programming in airframe design. The correct answer is 
what matters. You can accept wrong answers in graphics display algorithms, but 
not when designing critical parts.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Russel Winder via Digitalmars-d
On Sat, 2014-06-28 at 03:42 -0700, Walter Bright via Digitalmars-d
wrote:
 On 6/28/2014 2:47 AM, francesco cattoglio wrote:
  When you need accuracy, 999 times out of 1000 you change the numerical
  technique, you don't just blindly upgrade the precision.
 
 I *have* experience doing numerical work. Upgrading the precision is the first 
 thing people try.

Nonetheless, algorithm and expression of the algorithm are often more
important. As proven by my Pi_Quadrature examples, you can appear to have
better results with greater precision, but the way the code operates is
actually the core problem: the code I have written does not do things in
the best way to achieve the best result at a given accuracy level.

[…]
 
 Errors accumulate very rapidly and easily overwhelm the significance of the 
 answer.

I wonder if programmers should only be allowed to use floating point
numbers in their code if they have studied numerical analysis?
 
  Especially the numerical analysts themselves will pay that price. 64 bit 
  HAS to
  be as fast as possible, if you want to be competitive when it comes to any 
  kind
  of numerical work.
 
 Getting the wrong answer quickly is not useful when you're calculating the 
 stress levels in a part.

[…]
 Again, I've done numerical programming in airframe design. The correct answer 
 is 
 what matters. You can accept wrong answers in graphics display algorithms, 
 but 
 not when designing critical parts.

Or indeed when calculating anything to do with money.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: std.math performance (SSE vs. real)

2014-06-28 Thread Russel Winder via Digitalmars-d
On Fri, 2014-06-27 at 15:04 +0200, dennis luehring via Digitalmars-d
wrote:
 On 27.06.2014 14:20, Russel Winder via Digitalmars-d wrote:
  On Fri, 2014-06-27 at 11:10 +, John Colvin via Digitalmars-d wrote:
  […]
  I understand why the current situation exists. In 2000 x87 was
  the standard and the 80bit precision came for free.
 
  Real programmers have been using 128-bit floating point for decades. All
  this namby-pamby 80-bit stuff is just an aberration and should never
  have happened.
 
 what consumer hardware and compiler supports 128-bit floating points?

None, but what has that to do with the core problem being debated?

The core problem here is that no programming language has a proper type
system able to deal with hardware. C has a hack, Fortran has a less
problematic hack. Go insists on float32, float64, etc., which is better
but still not great.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: std.math performance (SSE vs. real)

2014-06-28 Thread Russel Winder via Digitalmars-d
On Fri, 2014-06-27 at 13:11 +, John Colvin via Digitalmars-d wrote:
 On Friday, 27 June 2014 at 13:04:31 UTC, dennis luehring wrote:
  On 27.06.2014 14:20, Russel Winder via Digitalmars-d wrote:
  On Fri, 2014-06-27 at 11:10 +, John Colvin via 
  Digitalmars-d wrote:
  […]
  I understand why the current situation exists. In 2000 x87 was
  the standard and the 80bit precision came for free.
 
  Real programmers have been using 128-bit floating point for 
  decades. All
  this namby-pamby 80-bit stuff is just an aberration and should 
  never
  have happened.
 
  what consumer hardware and compiler supports 128-bit floating 
  points?
 
 I think he was joking :)

Actually no, but…

 No consumer hardware supports IEEE binary128 as far as I know. 
 Wikipedia suggests that Sparc used to have some support.

For once Wikipedia is not wrong. IBM 128-bit is not IEEE compliant (but
pre-dates IEEE standards). SPARC is IEEE compliant. No other hardware
manufacturer appears to care about accuracy of floating point expression
evaluation. GPU manufacturers have an excuse of sorts in that speed is
more important than accuracy for graphics model evaluation. GPGPU
suffers because of this.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: std.math performance (SSE vs. real)

2014-06-28 Thread francesco cattoglio via Digitalmars-d

On Saturday, 28 June 2014 at 10:42:19 UTC, Walter Bright wrote:

On 6/28/2014 2:47 AM, francesco cattoglio wrote:
I *have* experience doing numerical work. Upgrading the precision 
is the first thing people try.




Brute force is always the first thing people try :o)

It happens with both numerical integration and inverting 
matrices. Inverting matrices is commonplace for solving N 
equations with N unknowns.
Errors accumulate very rapidly and easily overwhelm the 
significance of the answer.


And that's exactly the reason you change approach instead of 
getting greater precision: the adding precision approach scales 
horribly, at least in my field of study, which is solving 
numerical PDEs.

(BTW: no sane person inverts matrices)

Getting the wrong answer quickly is not useful when you're 
calculating the stress levels in a part.


We are talking about paying a price when you don't need it. With 
the correct approach, solving numerical problems with double 
precision floats yields perfectly fine results. And it is, in 
fact, commonplace.


Again, I've not yet read a research paper in which it was clearly 
stated that 64bit floats were not good enough for solving a whole 
class of PDE problems. I'm not saying that real is useless, quite 
the opposite: I love the idea of having an extra tool when the 
need arises. I think the focus should be about not paying a price 
for what you don't use.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Iain Buclaw via Digitalmars-d
On 27 June 2014 10:37, hane via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Friday, 27 June 2014 at 06:48:44 UTC, Iain Buclaw via Digitalmars-d
 wrote:

 On 27 June 2014 07:14, Iain Buclaw ibuc...@gdcproject.org wrote:

 On 27 June 2014 02:31, David Nadlinger via Digitalmars-d
 digitalmars-d@puremagic.com wrote:

 Hi all,

 right now, the use of std.math over core.stdc.math can cause a huge
 performance problem in typical floating point graphics code. An instance
 of
 this has recently been discussed here in the Perlin noise benchmark
 speed
 thread [1], where even LDC, which already beat DMD by a factor of two,
 generated code more than twice as slow as that by Clang and GCC. Here,
 the
 use of floor() causes trouble. [2]

 Besides the somewhat slow pure D implementations in std.math, the
 biggest
 problem is the fact that std.math almost exclusively uses reals in its
 API.
 When working with single- or double-precision floating point numbers,
 this
 is not only more data to shuffle around than necessary, but on x86_64
 requires the caller to transfer the arguments from the SSE registers
 onto
 the x87 stack and then convert the result back again. Needless to say,
 this
 is a serious performance hazard. In fact, this accounts for a 1.9x
 slowdown
 in the above benchmark with LDC.

 Because of this, I propose to add float and double overloads (at the
 very
 least the double ones) for all of the commonly used functions in
 std.math.
 This is unlikely to break much code, but:
  a) Somebody could rely on the fact that the calls effectively widen the
 calculation to 80 bits on x86 when using type deduction.
 b) Additional overloads make e.g. &floor ambiguous without context, of
 course.

 What do you think?

 Cheers,
 David


 This is the reason why floor is slow, it has an array copy operation.

 ---
   auto vu = *cast(ushort[real.sizeof/2]*)(&x);
 ---

 I didn't like it at the time I wrote, but at least it prevented the
 compiler (gdc) from removing all bit operations that followed.

 If there is an alternative to the above, then I'd imagine that would
 speed up floor by tenfold.


 Can you test with this?

 https://github.com/D-Programming-Language/phobos/pull/2274

 Float and Double implementations of floor/ceil are trivial and I can add
 later.


 Nice! I tested with the Perlin noise benchmark, and it got faster (in my
 environment, 1.030s -> 0.848s).
 But floor still consumes almost half of the execution time.


I've done some further improvements in that PR.  I'd imagine you'd see
a little more juice squeezed out.


Re: Module level variable shadowing

2014-06-28 Thread dennis luehring via Digitalmars-d

On 28.06.2014 11:30, Jacob Carlborg wrote:

On 2014-06-28 08:19, dennis luehring wrote:


thx for the examples - never though of these problems

i personaly would just forbid any shadowing and single-self-assign
and then having unique names (i use m_ for members and p_ for parameters
etc.) or give a compile error asking for this.x or .x (maybe problematic
with inner structs/functions)


I think, in general, if you need to prefix/suffix any symbols name,
there's something wrong with the language.


i agree 100% - i just try to overcome the shadowing cleanly with this AND 
have also scope information in the name (i just want to know at every 
place in code if something is a parameter)


but i would always prefer a better working method



Re: Send file to printer in D language ( windows )

2014-06-28 Thread via Digitalmars-d

On Friday, 27 June 2014 at 18:17:21 UTC, Alexandre wrote:
I searched the internet for how to send documents (txt) to the 
printer, but found nothing. How can I send TXT files to the 
printer using the D language?


A good and simple example program is redpr.c, which is part of 
RedMon.

http://pages.cs.wisc.edu/~ghost/redmon/index.htm
The source of redpr.c can be found in 
http://pages.cs.wisc.edu/~ghost/gsview/download/redmon19.zip


Cheers
Jürgen


Re: std.math performance (SSE vs. real)

2014-06-28 Thread John Colvin via Digitalmars-d
On Saturday, 28 June 2014 at 10:34:00 UTC, Russel Winder via 
Digitalmars-d wrote:


So D should perhaps make a breaking change and have types 
int32, int64,
float32, float64, float80, and get away from the vagaries of 
bizarre

type relationships with hardware?


`real`* is the only builtin numerical type in D that doesn't have 
a defined width. http://dlang.org/type.html


*well I guess there's size_t and ptrdiff_t, but they aren't 
distinct types in their own right.


Re: Module level variable shadowing

2014-06-28 Thread Ary Borenszweig via Digitalmars-d

On 6/28/14, 6:30 AM, Jacob Carlborg wrote:

On 2014-06-28 08:19, dennis luehring wrote:


thx for the examples - never thought of these problems

i personally would just forbid any shadowing and single-self-assign
and then having unique names (i use m_ for members and p_ for parameters
etc.) or give a compile error asking for this.x or .x (maybe problematic
with inner structs/functions)


I think, in general, if you need to prefix/suffix any symbol's name,
there's something wrong with the language.


In Ruby the usage of a variable is always prefixed: `@foo` for instance 
vars, `$foo` for global variables, `FOO` for constants. You can't make a 
mistake. It's... perfect :-)




Re: std.math performance (SSE vs. real)

2014-06-28 Thread Element 126 via Digitalmars-d

On 06/28/2014 12:33 PM, Russel Winder via Digitalmars-d wrote:

On Sat, 2014-06-28 at 09:07 +, John Colvin via Digitalmars-d wrote:
[…]

I still maintain that the need for the precision of 80bit reals
is a niche demand. It's a very important niche, but it doesn't
justify having its relatively extreme requirements be the
default. Someone writing a matrix inversion has only themselves
to blame if they don't know plenty of numerical analysis and look
very carefully at the specifications of all operations they are
using.


I fear the whole argument is getting misguided. We should reset.

If you are doing numerical calculations then accuracy is critical.
Arbitrary precision floats are the only real (!) way of doing any
numeric non-integer calculation, and arbitrary precision integers are
the only way of doing integer calculations.

However speed is also an issue, so to obtain speed we have hardware
integer and floating point ALUs.

The cost for the integer ALU is bounded integers. Python appreciates
this and uses hardware integers when it can and software integers
otherwise. Thus Python is very good for doing integer work. C, C++, Go,
D, Fortran, etc. are fundamentally crap for integer calculation because
integers are bounded. Of course if calculations are provably within the
hardware integer bounds this is not a constraint and we are happy with
hardware integers. Just don't try calculating factorial, Fibonacci
numbers and other numbers used in some bioinformatics and quant models.
There is a reason why SciPy has a massive following in bioinformatics
and quant computing.

The cost for floating point ALU is accuracy. Hardware floating point
numbers are dreadful in that sense, but again the issue is speed and for
GPU they went 32-bit for speed. Now they are going 64-bit as they can
just about get the same speed and the accuracy is so much greater. For
hardware floating point the more bits you have the better. Hence IBM in
the 360 and later having 128-bit floating point for accuracy at the
expense of some speed. Sun had 128-bit in the SPARC processors for
accuracy at the expense of a little speed.

As Walter has or will tell us, C (and thus C++) got things woefully
wrong in support of numerical work because the inventors were focused on
writing operating systems, supporting only PDP hardware. They and the
folks that then wrote various algorithms didn't really get numerical
analysis. If C had targeted IBM 360 from the outset things might have
been better.

We have to be clear on this: Fortran is the only language that supports
hardware floating types even at all well.

Intel's 80-bit floating point was an aberration; they should just have
done 128-bit in the first place. OK, so they got the 80-bit stuff as a
sort of free side-effect of creating 64-bit, but they ran with it. They
shouldn't have. I cannot see it ever happening again. cf. ARM.

By being focused on Intel chips, D has failed to get floating point
correct in a way very analogous to C failing to get floating point types
right by focusing on PDP. Yes, using 80-bit on Intel is good, but no-one
else has this. Floating point sizes should be 32-, 64-, 128-, 256-bit,
etc. D needs to be able to handle this. So does C, C++, Java, etc. Go
will be able to handle it when it is ported to appropriate hardware as
they use float32, float64, etc. as their types. None of this float,
double, long double, double double rubbish.

So D should perhaps make a breaking change and have types int32, int64,
float32, float64, float80, and get away from the vagaries of bizarre
type relationships with hardware?



+1 for float32 & cie. These names are much more explicit than the 
current ones. But I see two problems with it:


 - These names are already used in core.simd to denote vectors, and AVX 
3 (which should appear in mainstream CPUs next year) will require the 
use of float16, so the next revision might cause a collision. This could 
be avoided by using real32, real64... instead, but I prefer floatxx 
since it reminds us that we are not dealing with an exact real number.


 - These types are redundant, and people coming from C/C++ will likely 
use float and double instead. It's much too late to think of deprecating 
them since it would break backward compatibility (although it would be 
trivial to update the code with DFix... if someone is still 
maintaining the code).


A workaround would be to use a template which maps to the correct native 
type, iff it has the exact number of bits specified, or issues an error. 
Here is a quick mockup (does not support all types). I used fp instead 
of float or real to avoid name collisions with the current types.


template fp(uint n) {
    static if (n == 32) {
        alias fp = float;
    } else static if (n == 64) {
        alias fp = double;
    } else static if (n == 80) {
        static if (real.mant_dig == 64) {
            alias fp = real;
        } else {
            static assert(0, "no 80-bit floating point type on this platform");
        }
    } else {
        static assert(0, "unsupported floating point width");
    }
}

Re: [OT] The ART runtime presentation at Google I/O 2014

2014-06-28 Thread Kagamin via Digitalmars-d
BTW, how does GC compare to RC with respect to battery life? All 
those memory scans don't come for free.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Andrei Alexandrescu via Digitalmars-d

On 6/27/14, 11:16 PM, Walter Bright wrote:

On 6/27/2014 10:18 PM, Walter Bright wrote:

On 6/27/2014 4:10 AM, John Colvin wrote:

*The number of algorithms that are both numerically stable/correct
and benefit
significantly from > 64bit doubles is very small.


To be blunt, baloney. I ran into these problems ALL THE TIME when doing
professional numerical work.



Sorry for being so abrupt. FP is important to me - it's not just about
performance, it's also about accuracy.


The only problem is/would be when the language forces one choice over 
the other. Both options of maximum performance and maximum precision 
should be handily accessible to D users.


Andrei



Re: std.math performance (SSE vs. real)

2014-06-28 Thread Andrei Alexandrescu via Digitalmars-d

On 6/28/14, 3:42 AM, Walter Bright wrote:

Inverting matrices is commonplace for solving N equations with N
unknowns.


Actually nobody does that.

Also, one consideration is that the focus of numeric work changes with 
time; nowadays it's all about machine learning, a field that virtually 
didn't exist 20 years ago. In machine learning precision does make a 
difference sometimes, but the key to good ML work is to run many 
iterations over large data sets - i.e., speed.


I have an alarm go off when someone proffers a very strong conviction. 
Very strong convictions means there is no listening to any argument 
right off the bat, which locks out any reasonable discussion before it 
even begins.


For better or worse modern computing units have focused on 32- and 
64-bit float, leaving 80-bit floats neglected. I think it's time to 
accept that simple fact and act on it, instead of claiming we're the 
best in the world at FP math while everybody else speeds by.



Andrei


Re: std.math performance (SSE vs. real)

2014-06-28 Thread John Colvin via Digitalmars-d
On Saturday, 28 June 2014 at 14:01:13 UTC, Andrei Alexandrescu 
wrote:

On 6/28/14, 3:42 AM, Walter Bright wrote:
Inverting matrices is commonplace for solving N equations with 
N

unknowns.


Actually nobody does that.

Also, one consideration is that the focus of numeric work 
changes with time; nowadays it's all about machine learning


It's the most actively publicised frontier, perhaps, but there's 
a huge amount of solid work happening elsewhere. People still 
need better fluid, molecular dynamics etc. simulations, numerical 
PDE solvers, finite element modelling and so on. There's a whole 
world out there :)


That doesn't diminish your main point though.

For better or worse modern computing units have focused on 32- 
and 64-bit float, leaving 80-bit floats neglected. I think it's 
time to accept that simple fact and act on it, instead of 
claiming we're the best in the world at FP math while everybody 
else speeds by.



Andrei


+1


Re: Bounty Increase on Issue #1325927

2014-06-28 Thread Nordlöw
I doubt bounties are effective as a motivation for this kind of 
thing.


If so couldn't you make your code public?


Re: Module level variable shadowing

2014-06-28 Thread dennis luehring via Digitalmars-d

Am 28.06.2014 14:20, schrieb Ary Borenszweig:

On 6/28/14, 6:30 AM, Jacob Carlborg wrote:

On 2014-06-28 08:19, dennis luehring wrote:


thx for the examples - never thought of these problems

I personally would just forbid any shadowing and self-assignment,
and then have unique names (I use m_ for members and p_ for parameters
etc.), or give a compile error asking for this.x or .x (maybe problematic
with inner structs/functions)


I think, in general, if you need to prefix/suffix any symbols name,
there's something wrong with the language.


In Ruby the usage of a variable is always prefixed: `@foo` for instance
vars, `$foo` for global variables, `FOO` for constants. You can't make a
mistake. It's... perfect :-)



i like the ruby-way


Re: Bounty Increase on Issue #1325927

2014-06-28 Thread Iain Buclaw via Digitalmars-d
On 28 June 2014 15:21, Nordlöw digitalmars-d@puremagic.com wrote:
 I doubt bounties are effective as a motivation for this kind of thing.


 If so couldn't you make your code public?

I doubt there is any code to be made public that isn't already. :)



Re: std.math performance (SSE vs. real)

2014-06-28 Thread Alex_Dovhal via Digitalmars-d

On Saturday, 28 June 2014 at 10:42:19 UTC, Walter Bright wrote:
It happens with both numerical integration and inverting 
matrices. Inverting matrices is commonplace for solving N 
equations with N unknowns.


Errors accumulate very rapidly and easily overwhelm the 
significance of the answer.


if one wants better precision when solving linear equations he/she
would at least use QR decomposition.
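For the record, QR here means factoring A = QR with Q orthogonal and R upper triangular, then solving Rx = Q^T b; orthogonal transforms do not amplify rounding error. A small numpy sketch (my illustration, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
b = A @ x_true

# QR decomposition: A = QR, Q orthogonal, R upper triangular.
Q, R = np.linalg.qr(A)
# Solve Rx = Q^T b; no explicit inverse of A is ever formed.
x = np.linalg.solve(R, Q.T @ b)

print(np.linalg.norm(x - x_true))  # tiny residual error
```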



Re: [OT] The ART runtime presentation at Google I/O 2014

2014-06-28 Thread Paulo Pinto via Digitalmars-d

Am 28.06.2014 14:42, schrieb Kagamin:

BTW, how GC compares to RC with respect to battery life? All those
memory scans don't come for free.


I don't know, but given the focus to battery life in the upcoming 
Android release I would say it should be good enough.


My S3 with Dalvik can last almost two days with occasional network usage.

My older Android phone, an operator specific model running 2.2, can last 
almost four days.


--
Paulo


Re: Pair literal for D language

2014-06-28 Thread Mason McGill via Digitalmars-d

On Saturday, 28 June 2014 at 09:15:29 UTC, Dicebot wrote:

On Friday, 27 June 2014 at 22:01:21 UTC, Mason McGill wrote:
I like DIP54 and I think the work on fixing tuples is awesome, 
but I have 1 nit-picky question: why is it called 
TemplateArgumentList when it's not always used as template 
arguments?


 void func(string, string) { }

 TypeTuple!(string, string) var;
 var[0] = "I'm nobody's ";
 var[1] = "template argument!";
 func(var);

Why not a name that emphasizes the entity's semantics, like 
StaticList/ExpandingList/StaticTuple/ExpandingTuple?


Because it is defined by a template argument list and has exactly 
the same semantics as one. And the semantics are unique and obscure 
enough that no other name can express it precisely.


Understood. I was just expressing my initial impression: that it 
seemed strange that a symbol declared as a `TemplateArgumentList` 
was neither passed nor received as template arguments.


Re: [OT] The ART runtime presentation at Google I/O 2014

2014-06-28 Thread w0rp via Digitalmars-d

On Saturday, 28 June 2014 at 06:23:25 UTC, Paulo Pinto wrote:

Hi,

posting this talk here, as Java performance on Android is often 
mentioned in the discussions about GC vs RC performance.


So here you have the Android  team explaining:

- Dalvik GC sucks and is a basic stop the world implementation 
with pauses > 10ms


- ART solves the GC issue by adopting multiple concurrent GC 
algorithms,
depending on the application state thus reducing the stop down 
to 3ms. More optimizations still to be done until the final 
release


- JIT compiler optimized for battery life and low power 
processors, thus only very hot methods get actually compiled to 
native code


- Performance sorted out, by doing AOT compilation on install, 
similar to approach taken by OS/400, Windows Phone 8


https://www.youtube.com/watch?v=EBlTzQsUoOw



--
Paulo


I like his focus on the duration of frames per second. If you are 
aiming for 60FPS, you are aiming for a time period of about 16ms 
to 17ms per frame. So he mentions the current Dalvik GC occupying 
about 10ms for a collection in most cases, leading to a single 
dropped frame, and a near full memory case leading to about 50ms 
and above, leading to a few dropped frames. So when thinking 
about optimising GC, aiming for a collection time small enough to 
fit comfortably inside a frame seems like a good way to think 
about it.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread H. S. Teoh via Digitalmars-d
On Sat, Jun 28, 2014 at 03:31:36PM +, Alex_Dovhal via Digitalmars-d wrote:
 On Saturday, 28 June 2014 at 10:42:19 UTC, Walter Bright wrote:
 It happens with both numerical integration and inverting matrices.
 Inverting matrices is commonplace for solving N equations with N
 unknowns.
 
 Errors accumulate very rapidly and easily overwhelm the significance
 of the answer.
 
 if one wants better precision with solving linear equation he/she at
 least would use QR-decomposition.

Yeah, inverting matrices is generally not the preferred method for
solving linear equations, precisely because of accumulated roundoff
errors. Usually one would use a linear algebra library which has
dedicated algorithms for solving linear systems, which extracts the
solution(s) using more numerically-stable methods than brute-force
matrix inversion. They are also more efficient than inverting the matrix
and then doing a matrix multiplication to get the solution vector.
Mathematically, they are equivalent to matrix inversion, but numerically
they are more stable and not as prone to precision loss issues.

Having said that, though, added precision is always welcome,
particularly when studying mathematical objects (as opposed to more
practical applications like engineering, where 6-8 digits of precision
in the result is generally more than good enough). Of course, the most
ideal implementation would be to use algebraic representations that can
represent quantities exactly, but exact representations are not always
practical (they are too slow for very large inputs, or existing
libraries only support hardware floating-point types, or existing code
requires a lot of effort to support software arbitrary-precision
floats). In such cases, squeezing as much precision out of your hardware
as possible is a good first step towards a solution.
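As a concrete sketch of the point above (my numpy illustration, not from the thread): solving Ax = b with a dedicated LU-based solver rather than explicitly inverting A:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# Dedicated solver: LAPACK LU factorization with partial pivoting.
x_solve = np.linalg.solve(A, b)

# Explicit inversion followed by a matrix-vector product: more work,
# and generally no better (often worse) numerically.
x_inv = np.linalg.inv(A) @ b

print(np.linalg.norm(x_solve - x_true))
print(np.linalg.norm(x_inv - x_true))
```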


T

-- 
Time flies like an arrow. Fruit flies like a banana.


Re: Pair literal for D language

2014-06-28 Thread deadalnix via Digitalmars-d

On Saturday, 28 June 2014 at 09:15:29 UTC, Dicebot wrote:

On Friday, 27 June 2014 at 22:01:21 UTC, Mason McGill wrote:
I like DIP54 and I think the work on fixing tuples is awesome, 
but I have 1 nit-picky question: why is it called 
TemplateArgumentList when it's not always used as template 
arguments?


 void func(string, string) { }

 TypeTuple!(string, string) var;
 var[0] = "I'm nobody's ";
 var[1] = "template argument!";
 func(var);

Why not a name that emphasizes the entity's semantics, like 
StaticList/ExpandingList/StaticTuple/ExpandingTuple?


Because it is defined by a template argument list and has exactly 
the same semantics as one. And the semantics are unique and obscure 
enough that no other name can express it precisely.




You keep repeating that, but people keep being confused. It is 
time to admit defeat.


'StaticList' is what you may have wanted it to be but not what 
it is.


It is whatever we choose it is.


Re: Pair literal for D language

2014-06-28 Thread deadalnix via Digitalmars-d
On Friday, 27 June 2014 at 05:45:19 UTC, H. S. Teoh via 
Digitalmars-d wrote:
We'd make a step forward when we stop calling type tuples type 
tuples.

They are not tuples, and do not contain (only) types.


I agree, but that's what they're called in the compiler source 
code, so

it's kinda hard to call them something else.



http://www.reddit.com/r/programming/comments/298vtt/dconf_2014_panel_with_walter_bright_and_andrei/ciiw5zb

:(


Re: Pair literal for D language

2014-06-28 Thread Dicebot via Digitalmars-d

On Saturday, 28 June 2014 at 19:39:35 UTC, deadalnix wrote:
You keep repeating that, but people keep being confused. It is 
time to admit defeat.


I don't think any name is possible which wouldn't keep someone 
confused - as long as this entity behaves as it does.


'StaticList' is what you may have wanted it to be but not what 
it is.


It is whatever we choose it is.


We can't change existing semantics even slightly, too interleaved 
with many language aspects.


Re: Pair literal for D language

2014-06-28 Thread Timon Gehr via Digitalmars-d

On 06/28/2014 06:11 PM, Mason McGill wrote:

On Saturday, 28 June 2014 at 09:15:29 UTC, Dicebot wrote:

On Friday, 27 June 2014 at 22:01:21 UTC, Mason McGill wrote:

I like DIP54 and I think the work on fixing tuples is awesome, but I
have 1 nit-picky question: why is it called TemplateArgumentList
when it's not always used as template arguments?

 void func(string, string) { }

 TypeTuple!(string, string) var;
 var[0] = "I'm nobody's ";
 var[1] = "template argument!";
 func(var);

Why not a name that emphasizes the entity's semantics, like
StaticList/ExpandingList/StaticTuple/ExpandingTuple?


Because it is defined by a template argument list and has exactly the
same semantics as one. And the semantics are unique and obscure enough
that no other name can express it precisely.


Understood. I was just expressing my initial impression: that it seemed
strange that a symbol declared as a `TemplateArgumentList` was neither
passed nor received as template arguments.


That would be strange, but it isn't.

TypeTuple!(string, string) var;
          ^~~~~~~~~~~~~~~~
          passed here


alias TypeTuple(T...)=T; // <- aliased here
               ^~~~
               received here

Hence:

TypeTuple!(string, string) var;
          ^~~~~~~~~~~~~~~~
this is actually the template argument list that was passed

In any case, I just call it 'Seq'.


Re: Pair literal for D language

2014-06-28 Thread Dicebot via Digitalmars-d

On Saturday, 28 June 2014 at 16:11:15 UTC, Mason McGill wrote:
Understood. I was just expressing my initial impression: that 
it seemed strange that a symbol declared as a 
`TemplateArgumentList` was neither passed nor received as 
template arguments.


My hope is that having such a surprising name will motivate 
people to figure out what it is before having any false 
expectations and assumptions (like we have right now with all 
those `SomethingTuple`)


Re: Bounty Increase on Issue #1325927

2014-06-28 Thread Dicebot via Digitalmars-d

On Saturday, 28 June 2014 at 14:21:03 UTC, Nordlöw wrote:
I doubt bounties are effective as a motivation for this kind 
of thing.


If so couldn't you make your code public?


Don does not have any code that actually fixes that issue. He has 
done a lot of work on CTFE refactoring and unification, making it 
actually possible to _start_ fixing the issue. And all this work 
is now part of DMD master.


Re: Pair literal for D language

2014-06-28 Thread Timon Gehr via Digitalmars-d

On 06/28/2014 09:40 PM, deadalnix wrote:

On Friday, 27 June 2014 at 05:45:19 UTC, H. S. Teoh via Digitalmars-d
wrote:

We'd make a step forward when we stop calling type tuples type tuples.
They are not tuples, and do not contain (only) types.


I agree, but that's what they're called in the compiler source code, so
it's kinda hard to call them something else.



http://www.reddit.com/r/programming/comments/298vtt/dconf_2014_panel_with_walter_bright_and_andrei/ciiw5zb


:(


Well, there are many things that could be considered tuples:

void foo(int x, int y){
//       ^~~~~~~~~~~~
}

void main(){
    foo(1,2);
    //  ^~~
}

:o)


Re: Pair literal for D language

2014-06-28 Thread H. S. Teoh via Digitalmars-d
On Sat, Jun 28, 2014 at 09:15:18PM +, Dicebot via Digitalmars-d wrote:
 On Saturday, 28 June 2014 at 16:11:15 UTC, Mason McGill wrote:
 Understood. I was just expressing my initial impression: that it
 seemed strange that a symbol declared as a `TemplateArgumentList` was
 neither passed nor received as template arguments.
 
 My hope is that having such surprising name will exactly motivate
 people to figure out what it is before having any false expectations
 and assumptions (like we have right now with all those
 `SomethingTuple`)

We've had this discussion before (and not just once), that tuple is a
misnomer, yada yada yada, but until somebody files a PR to change this,
things are just going to continue to remain the same. So I'd say,
whatever name you think is best in place of tuple, just go for it and
file a PR. Then we can bikeshed over the exact name once things get
going.


T

-- 
One reason that few people are aware there are programs running the
internet is that they never crash in any significant way: the free
software underlying the internet is reliable to the point of
invisibility. -- Glyn Moody, from the article Giving it all away


Re: Pair literal for D language

2014-06-28 Thread Dicebot via Digitalmars-d
On Saturday, 28 June 2014 at 21:25:31 UTC, H. S. Teoh via 
Digitalmars-d wrote:
We've had this discussion before (and not just once), that 
tuple is a
misnomer, yada yada yada, but until somebody files a PR to 
change this,
things are just going to continue to remain the same. So I'd 
say,
whatever name you think is best in place of tuple, just go for 
it and
file a PR. Then we can bikeshed over the exact name once things 
get

going.


I know. It will take quite some time to finish though, and 
casually using the term tuple all the time does not help - pretty 
much anything is better.


Re: Pair literal for D language

2014-06-28 Thread deadalnix via Digitalmars-d

On Saturday, 28 June 2014 at 21:22:14 UTC, Timon Gehr wrote:

On 06/28/2014 09:40 PM, deadalnix wrote:
On Friday, 27 June 2014 at 05:45:19 UTC, H. S. Teoh via 
Digitalmars-d

wrote:
We'd make a step forward when we stop calling type tuples 
type tuples.

They are not tuples, and do not contain (only) types.


I agree, but that's what they're called in the compiler 
source code, so

it's kinda hard to call them something else.



http://www.reddit.com/r/programming/comments/298vtt/dconf_2014_panel_with_walter_bright_and_andrei/ciiw5zb


:(


Well, there are many things that could be considered tuples:

void foo(int x, int y){
//  ^~
}

void main(){
foo(1,2);
// ^
}

:o)


That is certainly a useful language construct (what Dicebot calls 
TemplateArgumentList, but as we see, there is no template in this 
sample code), but certainly not what is commonly called a tuple.


Re: Pair literal for D language

2014-06-28 Thread Ary Borenszweig via Digitalmars-d

On 6/28/14, 6:49 PM, deadalnix wrote:

On Saturday, 28 June 2014 at 21:22:14 UTC, Timon Gehr wrote:

On 06/28/2014 09:40 PM, deadalnix wrote:

On Friday, 27 June 2014 at 05:45:19 UTC, H. S. Teoh via Digitalmars-d
wrote:

We'd make a step forward when we stop calling type tuples type tuples.
They are not tuples, and do not contain (only) types.


I agree, but that's what they're called in the compiler source code, so
it's kinda hard to call them something else.



http://www.reddit.com/r/programming/comments/298vtt/dconf_2014_panel_with_walter_bright_and_andrei/ciiw5zb



:(


Well, there are many things that could be considered tuples:

void foo(int x, int y){
//  ^~
}

void main(){
foo(1,2);
// ^
}

:o)


That is certainly a useful language construct (what Dicebot calls
TemplateArgumentList, but as we see, there is no template in this sample
code), but certainly not what is commonly called a tuple.


I think it's common: 
http://julia.readthedocs.org/en/latest/manual/types/#tuple-types


Re: Pair literal for D language

2014-06-28 Thread safety0ff via Digitalmars-d

On Saturday, 28 June 2014 at 21:51:17 UTC, Ary Borenszweig wrote:


I think it's common: 
http://julia.readthedocs.org/en/latest/manual/types/#tuple-types


Actually, that section is about normal tuples, there is no 
distinction between normal tuples and type tuples in julia.

From the Julia repl:
julia> typeof((1,1))
(Int64,Int64)
julia> typeof(typeof((1,1)))
(DataType,DataType)
julia> typeof((Int64,1))
(DataType,Int64)

So the equivalent to our TypeTuple is a normal tuple containing 
DataType types.


Re: Pair literal for D language

2014-06-28 Thread Timon Gehr via Digitalmars-d

On 06/29/2014 12:42 AM, safety0ff wrote:

On Saturday, 28 June 2014 at 21:51:17 UTC, Ary Borenszweig wrote:


I think it's common:
http://julia.readthedocs.org/en/latest/manual/types/#tuple-types


Actually, that section is about normal tuples, there is no distinction
between normal tuples and type tuples in julia.
 From the Julia repl:
julia> typeof((1,1))
(Int64,Int64)
julia> typeof(typeof((1,1)))
(DataType,DataType)
julia> typeof((Int64,1))
(DataType,Int64)

So the equivalent


There's no equivalent. No auto-expansion in Julia. (They do have 
explicit expansion, but an expanded tuple is not an object in its own 
right.)



to our TypeTuple is a normal tuple containing DataType
types.


TypeTuple!(int,string) t;

is basically the same as

int __t_field_0;
string __t_field_1;

alias t=TypeTuple!(__t_field_0,__t_field_1);

I.e. D conflates TypeTuple's of types with types of TypeTuples just as 
Julia conflates types of tuples with tuples of types.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread deadalnix via Digitalmars-d

On Saturday, 28 June 2014 at 09:07:17 UTC, John Colvin wrote:

On Saturday, 28 June 2014 at 06:16:51 UTC, Walter Bright wrote:

On 6/27/2014 10:18 PM, Walter Bright wrote:

On 6/27/2014 4:10 AM, John Colvin wrote:
*The number of algorithms that are both numerically 
stable/correct and benefit

significantly from > 64bit doubles is very small.


To be blunt, baloney. I ran into these problems ALL THE TIME 
when doing

professional numerical work.



Sorry for being so abrupt. FP is important to me - it's not 
just about performance, it's also about accuracy.


I still maintain that the need for the precision of 80bit reals 
is a niche demand. It's a very important niche, but it doesn't 
justify having its relatively extreme requirements be the 
default. Someone writing a matrix inversion has only themselves 
to blame if they don't know plenty of numerical analysis and 
look very carefully at the specifications of all operations 
they are using.


Paying the cost of moving to/from the fpu, missing out on 
increasingly large SIMD units, these make everyone pay the 
price.


inclusion of the 'real' type in D was a great idea, but 
std.math should be overloaded for float/double/real so people 
have the choice where they stand on the performance/precision 
front.


Would it make sense to have std.math and std.fastmath, or 
something along these lines?


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/28/2014 3:57 AM, Russel Winder via Digitalmars-d wrote:

I wonder if programmers should only be allowed to use floating point
numbers in their code if they have studied numerical analysis?


Be that as it may, why should a programming language make it harder for them to 
get right than necessary?


The first rule in doing numerical calculations, hammered into me at Caltech, is 
use the max precision available at every step. Rounding error is a major 
problem, and is very underappreciated by engineers until they have a big screwup.


The idea that 64 fp bits ought to be enough for anybody is a pernicious 
disaster, to put it mildly.




Or indeed when calculating anything to do with money.


You're better off using 64 bit longs counting cents to represent money than 
using floating point. But yeah, counting money has its own special problems.
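A quick illustration of why (a sketch of mine, not from the post): binary floating point cannot represent most decimal fractions exactly, while integer cents stay exact.

```python
# Summing a thousand dimes in binary floating point drifts away from
# the exact answer, because 0.10 has no exact binary representation.
total_float = sum(0.10 for _ in range(1000))
print(total_float)              # slightly off from 100.0
assert total_float != 100.0

# The same sum in integer cents is exact.
total_cents = sum(10 for _ in range(1000))
assert total_cents == 100 * 100
```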


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/28/2014 4:27 AM, francesco cattoglio wrote:

We are talking about paying a price when you don't need it.


More than that - the suggestion has come up here (and comes up repeatedly) to 
completely remove support for 80 bits. Heck, Microsoft has done so with VC++ and 
even once attempted to completely remove it from 64 bit Windows (I talked them 
out of it, you can thank me!).




With the correct
approach, solving numerical problems with double precision floats yields
perfectly fine results. And it is, in fact, commonplace.


Presuming your average mechanical engineer is well versed in how to do matrix 
inversion while accounting for precision problems is an absurd pipe dream.


Most engineers only know their math book algorithms, not comp sci best 
practices.

Heck, few CS graduates know how to do it.



Again, I've not read yet a research paper in which it was clearly stated that
64bit floats were not good enough for solving a whole class of PDE problem. I'm
not saying that real is useless, quite the opposite: I love the idea of having
an extra tool when the need arises. I think the focus should be about not paying
a price for what you don't use


I used to work doing numerical analysis on airplane parts. I didn't need a 
research paper to discover how much precision matters and when my results fell 
apart.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/28/2014 7:01 AM, Andrei Alexandrescu wrote:

On 6/28/14, 3:42 AM, Walter Bright wrote:

Inverting matrices is commonplace for solving N equations with N
unknowns.


Actually nobody does that.


I did that at Boeing when doing analysis of the movement of the control 
linkages. The traditional way it had been done before was using paper and pencil 
with drafting tools - I showed how it could be done with matrix math.




I have an alarm go off when someone proffers a very strong conviction. Very
strong convictions means there is no listening to any argument right off the
bat, which locks out any reasonable discussion before it even begins.


So far, everyone here has dismissed my experience out of hand. You too, with 
"nobody does that". I don't know how anyone here can make such a statement. How 
many of us have worked in non-programming engineering shops, besides me?




For better or worse modern computing units have focused on 32- and 64-bit float,
leaving 80-bit floats neglected.


Yep, for the game/graphics industry. Modern computing has also produced crappy 
trig functions with popular C compilers, because nobody using C cares about 
accurate answers (or they just assume what they're getting is correct - even worse).




I think it's time to accept that simple fact
and act on it, instead of claiming we're the best in the world at FP math while
everybody else speeds by.


Leaving us with a market opportunity for precision FP.

I note that even the title of this thread says nothing about accuracy, nor did 
the benchmark attempt to assess if there was a difference in results.




Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/28/2014 11:16 AM, H. S. Teoh via Digitalmars-d wrote:

(as opposed to more
practical applications like engineering, where 6-8 digits of precision
in the result is generally more than good enough).


Of the final result, sure, but NOT for the intermediate results. It is an utter 
fallacy to conflate required precision of the result with precision of the 
intermediate results.
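That distinction can be sketched numerically (a numpy illustration of mine, not from the post): a variance whose final answer needs only one digit is destroyed when the intermediate sums are carried in single precision.

```python
import numpy as np

data = [10000.0, 10001.0, 10002.0]   # true variance is 2/3

def naive_variance(x):
    # Textbook one-pass formula E[x^2] - E[x]^2: the two intermediate
    # sums are huge (~3e8) compared with the final answer (~0.67).
    n = x.size
    return (np.sum(x * x) - np.sum(x) ** 2 / n) / n

v32 = naive_variance(np.array(data, dtype=np.float32))
v64 = naive_variance(np.array(data, dtype=np.float64))
print(v32)   # all significance lost in the float32 intermediates
print(v64)   # correct to full precision
```

The result only needs a few digits, but the float32 intermediates cancel catastrophically, while float64 intermediates keep enough guard digits to survive the subtraction.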




Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/28/2014 3:33 AM, Russel Winder via Digitalmars-d wrote:

By being focused on Intel chips, D has failed to get floating point
correct in avery analogous way to C failing to get floating point types
right by focusing on PDP.


Sorry, I do not follow the reasoning here.


Yes using 80-bit on Intel is good, but no-one
else has this. Floating point sizes should be 32-, 64-, 128-, 256-bit,
etc. D needs to be able to handle this. So does C, C++, Java, etc. Go
will be able to handle it when it is ported to appropriate hardware as
they use float32, float64, etc. as their types. None of this float,
double, long double, double double rubbish.

So D should perhaps make a breaking change and have types int32, int64,
float32, float64, float80, and get away from the vagaries of bizarre
type relationships with hardware?


D's spec says that the 'real' type is the max size supported by the FP hardware. 
How is this wrong?




Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/28/2014 5:43 AM, Element 126 wrote:

+1 for float32 & cie. These names are much more explicit than the current ones.


I don't see any relevance to this discussion with whether 32 bit floats are 
named 'float' or 'float32'.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/28/2014 6:49 AM, Andrei Alexandrescu wrote:

The only problem is/would be when the language forces one choice over the other.
Both options of maximum performance and maximum precision should be handily
accessible to D users.



That's a much more reasonable position than we should abandon 80 bit reals.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Tofu Ninja via Digitalmars-d

I think this thread is getting out of hand. The main point was to
get float and double overloads for std.math.

This whole discussion about numeric stability, the naming of
double and float, the state of real: all of it is a little bit
ridiculous.

Numerical stability is not really related to getting faster
overloads other than the obvious fact that it is a trade off.
Float and double do not need a name change. Real also does not
need a change.

I think this thread needs to refocus on the main point, getting
math overloads for float and double and how to mitigate any
problems that might arise from that.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Timon Gehr via Digitalmars-d

On 06/29/2014 02:40 AM, Walter Bright wrote:

On 6/28/2014 3:33 AM, Russel Winder via Digitalmars-d wrote:

...
So D should perhaps make a breaking change and have types int32, int64,
float32, float64, float80, and get away from the vagaries of bizarre
type relationships with hardware?


D's spec says that the 'real' type is the max size supported by the FP
hardware. How is this wrong?



It is hardware-dependent.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Timon Gehr via Digitalmars-d

On 06/29/2014 02:42 AM, Walter Bright wrote:

On 6/28/2014 6:49 AM, Andrei Alexandrescu wrote:

The only problem is/would be when the language forces one choice over
the other.
Both options of maximum performance and maximum precision should be
handily
accessible to D users.



That's a much more reasonable position than "we should abandon 80 bit
reals".


If that is what you were arguing against, I don't think this was 
actually suggested.


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Walter Bright via Digitalmars-d

On 6/28/2014 6:14 PM, Timon Gehr wrote:

On 06/29/2014 02:40 AM, Walter Bright wrote:

On 6/28/2014 3:33 AM, Russel Winder via Digitalmars-d wrote:

...
So D should perhaps make a breaking change and have types int32, int64,
float32, float64, float80, and get away from the vagaries of bizarre
type relationships with hardware?


D's spec says that the 'real' type is the max size supported by the FP
hardware. How is this wrong?



It is hardware-dependent.


D does not require real to be 80 bits if the hardware does not support it.

Keep in mind that D is a systems programming language, and that implies you get 
access to the hardware types.
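For reference, the hardware dependence is easy to observe from the type's own properties (a small sketch; the printed values vary by platform, which is the point under discussion):

```d
import std.stdio;

void main()
{
    // On x86 with DMD, real is the 80-bit extended type (mant_dig == 64);
    // on targets without x87, real may be the same size as double
    // (mant_dig == 53).
    writeln("real.sizeof     = ", real.sizeof);
    writeln("real.mant_dig   = ", real.mant_dig);
    writeln("double.mant_dig = ", double.mant_dig);
}
```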


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Andrei Alexandrescu via Digitalmars-d

On 6/28/14, 5:33 PM, Walter Bright wrote:

On 6/28/2014 7:01 AM, Andrei Alexandrescu wrote:

On 6/28/14, 3:42 AM, Walter Bright wrote:

Inverting matrices is commonplace for solving N equations with N
unknowns.


Actually nobody does that.


I did that at Boeing when doing analysis of the movement of the control
linkages. The traditional way it had been done before was using paper
and pencil with drafting tools - I showed how it could be done with
matrix math.


Pen on paper is a low baseline. The classic way to solve linear 
equations with computers is to use Gaussian elimination methods adjusted 
to cancel imprecision. (There are a number of more specialized methods.)
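For illustration, a minimal sketch of Gaussian elimination with partial pivoting (the pivoting step is what limits imprecision, by never dividing by a small leading entry; the `solve` helper is my own, not a library function):

```d
import std.math : abs;

// Solve A*x = b via Gaussian elimination with partial pivoting.
// Modifies a and b in place; assumes A is square and non-singular.
double[] solve(double[][] a, double[] b)
{
    immutable n = b.length;
    foreach (k; 0 .. n)
    {
        // Partial pivoting: bring the row with the largest |a[i][k]| up.
        size_t piv = k;
        foreach (i; k + 1 .. n)
            if (abs(a[i][k]) > abs(a[piv][k])) piv = i;
        if (piv != k)
        {
            auto tr = a[k]; a[k] = a[piv]; a[piv] = tr;
            auto tb = b[k]; b[k] = b[piv]; b[piv] = tb;
        }
        // Eliminate column k below the pivot row.
        foreach (i; k + 1 .. n)
        {
            immutable f = a[i][k] / a[k][k];
            foreach (j; k .. n) a[i][j] -= f * a[k][j];
            b[i] -= f * b[k];
        }
    }
    // Back-substitution.
    auto x = new double[n];
    foreach_reverse (k; 0 .. n)
    {
        double s = b[k];
        foreach (j; k + 1 .. n) s -= a[k][j] * x[j];
        x[k] = s / a[k][k];
    }
    return x;
}

unittest
{
    // 2x + y = 3, x + 3y = 5  =>  x = 0.8, y = 1.4
    auto x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]);
    assert(abs(x[0] - 0.8) < 1e-12 && abs(x[1] - 1.4) < 1e-12);
}
```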


For really large equations with sparse matrices one uses the method of 
relaxations.



I have an alarm go off when someone proffers a very strong conviction.
Very strong convictions means there is no listening to any argument right
off the bat, which locks out any reasonable discussion before it even begins.


So far, everyone here has dismissed my experience out of hand. You too,
with "nobody does that". I don't know how anyone here can make such a
statement. How many of us have worked in non-programming engineering
shops, besides me?


My thesis - http://erdani.com/research/dissertation_color.pdf - and some 
of my work at Facebook, which has been patented - 
http://www.faqs.org/patents/app/20140046959 - use large matrix algebra 
intensively.



For better or worse modern computing units have focused on 32- and
64-bit float,
leaving 80-bit floats neglected.


Yep, for the game/graphics industry. Modern computing has also produced
crappy trig functions with popular C compilers, because nobody using C
cares about accurate answers (or they just assume what they're getting
is correct - even worse).



I think it's time to accept that simple fact
and act on it, instead of claiming we're the best in the world at FP
math while
everybody else speeds by.


Leaving us with a market opportunity for precision FP.

I note that even the title of this thread says nothing about accuracy,
nor did the benchmark attempt to assess if there was a difference in
results.


All I'm saying is that our convictions should be informed by, and 
commensurate with, our expertise.



Andrei



Re: std.math performance (SSE vs. real)

2014-06-28 Thread Andrei Alexandrescu via Digitalmars-d

On 6/28/14, 6:02 PM, Tofu Ninja wrote:

I think this thread is getting out of hand. The main point was to
get float and double overloads for std.math.

This whole discussion about numeric stability, the naming of
double and float, the state of real... all of it is a little bit
ridiculous.

Numerical stability is not really related to getting faster
overloads other than the obvious fact that it is a trade off.
Float and double do not need a name change. Real also does not
need a change.

I think this thread needs to refocus on the main point, getting
math overloads for float and double and how to mitigate any
problems that might arise from that.


Yes please. -- Andrei


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Andrei Alexandrescu via Digitalmars-d

On 6/28/14, 5:42 PM, Walter Bright wrote:

On 6/28/2014 6:49 AM, Andrei Alexandrescu wrote:

The only problem is/would be when the language forces one choice over
the other.
Both options of maximum performance and maximum precision should be
handily
accessible to D users.



That's a much more reasonable position than "we should abandon 80 bit
reals".


Awesome! -- Andrei


Re: std.math performance (SSE vs. real)

2014-06-28 Thread H. S. Teoh via Digitalmars-d
On Sat, Jun 28, 2014 at 05:16:53PM -0700, Walter Bright via Digitalmars-d wrote:
 On 6/28/2014 3:57 AM, Russel Winder via Digitalmars-d wrote:
[...]
 Or indeed when calculating anything to do with money.
 
 You're better off using 64 bit longs counting cents to represent money
 than using floating point. But yeah, counting money has its own
 special problems.

For counting money, I heard that the recommendation is to use
fixed-point arithmetic (i.e. integer values in cents).
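A tiny sketch of that approach (the `Money` type and its methods are hypothetical, not a real library):

```d
// Hypothetical fixed-point money type: amounts are 64-bit integer cents.
struct Money
{
    long cents;

    Money opBinary(string op : "+")(Money rhs) const
    {
        return Money(cents + rhs.cents);
    }

    // Apply a rate (e.g. 7/100 for 7% tax), rounding half up.
    // Sketch only: assumes non-negative amounts.
    Money applyRate(long num, long den) const
    {
        return Money((cents * num + den / 2) / den);
    }
}

unittest
{
    auto price = Money(1999);                 // $19.99
    assert((price + Money(5)).cents == 2004); // exact, no FP error
    // 7% of $19.99 is $1.3993, which rounds to $1.40 -- exactly,
    // with no binary floating-point representation error.
    assert(price.applyRate(7, 100).cents == 140);
}
```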


T

-- 
The best compiler is between your ears. -- Michael Abrash


Re: std.math performance (SSE vs. real)

2014-06-28 Thread H. S. Teoh via Digitalmars-d
On Sat, Jun 28, 2014 at 08:41:24PM -0700, Andrei Alexandrescu via Digitalmars-d 
wrote:
 On 6/28/14, 6:02 PM, Tofu Ninja wrote:
[...]
 I think this thread needs to refocus on the main point, getting
 math overloads for float and double and how to mitigate any
 problems that might arise from that.
 
 Yes please. -- Andrei

Let's see the PR!

And while we're on the topic, what about working on making std.math
CTFE-able? So far, CTFE simply doesn't support fundamental
floating-point operations like isInfinity, isNaN, signbit, to name a
few, because CTFE does not allow accessing the bit representation of
floating-point values. This is a big disappointment for me -- it defeats
the power of CTFE by making it unusable if you want to use it to
generate pre-calculated tables of values.

Perhaps we can introduce some intrinsics for implementing these
functions so that they work both in CTFE and at runtime?

https://issues.dlang.org/show_bug.cgi?id=3749

Thanks to Iain's hard work on std.math, now we have software
implementations for all(?) the basic math functions, so in theory they
should be CTFE-able -- except that some functions require access to the
floating-point bit representation, which CTFE doesn't support. All it
takes is to implement these primitives, and std.math will be completely CTFE-able
-- a big step forward IMHO.
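To make the limitation concrete, here is a sketch: the bit-level version below works at runtime but is rejected in CTFE, while the comparison-based stand-in is CTFE-able yet loses the signs of -0.0 and negative NaNs, which is exactly the information an intrinsic would preserve (function names are my own):

```d
// Runtime-only signbit: reinterprets the bits, which CTFE rejects.
bool signbitRuntime(double x)
{
    return (*cast(ulong*) &x) >> 63 != 0;
}

// CTFE-friendly stand-in: fine for ordinary values, but it cannot see
// the sign of -0.0 or of a negative NaN; that is the lost information.
bool signbitCtfe(double x)
{
    return x < 0;
}

static assert(signbitCtfe(-1.5));        // evaluated at compile time
// static assert(signbitRuntime(-1.5));  // error: pointer cast during CTFE

unittest
{
    assert(signbitRuntime(-0.0));   // the bit-level version sees the sign
    assert(!signbitCtfe(-0.0));     // the comparison version does not
}
```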


T

-- 
Talk is cheap. Whining is actually free. -- Lars Wirzenius


Re: std.math performance (SSE vs. real)

2014-06-28 Thread deadalnix via Digitalmars-d
On Sunday, 29 June 2014 at 04:38:31 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Sat, Jun 28, 2014 at 05:16:53PM -0700, Walter Bright via 
Digitalmars-d wrote:

On 6/28/2014 3:57 AM, Russel Winder via Digitalmars-d wrote:

[...]

Or indeed when calculating anything to do with money.

You're better off using 64 bit longs counting cents to represent money
than using floating point. But yeah, counting money has its own
special problems.


For counting money, I heard that the recommendation is to use
fixed-point arithmetic (i.e. integer values in cents).


T


MtGox was using float.


Re: typeid of an object whose static type is an interface returns the interface

2014-06-28 Thread Kapps via Digitalmars-d

On Friday, 27 June 2014 at 21:23:52 UTC, Mark Isaacson wrote:
If I have a variable whose static type is an interface and I call
typeid on it, I get the interface back, not the dynamic type.
This seems like confusing behavior. Is this the intended result?
This seems like confusing behavior. Is this the intended result?

I recognize that one needs some amount of state to perform the
dynamic type lookup, and so it is on that thought that a reason
for this might be based.


Interfaces are not necessarily Objects (particularly with the 
case of IUnknown or extern(C++)), and are handled somewhat 
differently from objects.


When you cast to Object, you're actually subtracting a few bytes 
from the pointer to get back to the Object, so technically the 
variable refers not to Object but to the interface. It is a bit 
odd (along with some of the other side-effects), but it does make 
some sense since you're referring to the interface and not to an 
Object.


That being said, I'm not 100% sure whether this is the intended 
behaviour when you actually do point to a class derived from 
Object.
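A small sketch of the behaviour the thread describes (assuming the dynamic type does derive from Object):

```d
interface I {}
class C : I {}

void main()
{
    I i = new C;
    // The static type is the interface, so typeid reports the interface:
    assert(typeid(i) == typeid(I));
    // Casting to Object adjusts the reference back to the object itself,
    // after which typeid reports the dynamic type:
    assert(typeid(cast(Object) i) == typeid(C));
}
```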


Re: std.math performance (SSE vs. real)

2014-06-28 Thread Sean Kelly via Digitalmars-d

On Sunday, 29 June 2014 at 00:16:51 UTC, Walter Bright wrote:

On 6/28/2014 3:57 AM, Russel Winder via Digitalmars-d wrote:


Or indeed when calculating anything to do with money.


You're better off using 64 bit longs counting cents to 
represent money than using floating point. But yeah, counting 
money has its own special problems.


Maybe if by "money" you mean dollars in the bank.  But for 
anything much beyond that you're doing floating point math.  
Often with specific rules for how and when rounding should occur. 
 Perhaps interestingly, it's typical for hedge funds to have a 
"rounding partner" who receives all the fractional pennies that 
are lost when divvying up the income for the other investors.


Re: What is best way to communicate between computer in local network ?

2014-06-28 Thread Sean Kelly via Digitalmars-d-learn

On Friday, 27 June 2014 at 13:03:20 UTC, John Colvin wrote:


It's an application and network dependant decision, but I would 
suggest http://code.dlang.org/packages/zmqd as suitable for 
most situations.


Yeah, this would be my first choice.  Or HTTP if integration with 
other applications is an option.  I really like JSON-RPC, though 
it seems to not get much attention.  Longer term, I'd like to 
extend the messaging in std.concurrency to allow interprocess 
communication as well.
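For context, the in-process messaging that would be extended across processes looks roughly like this (a sketch of current std.concurrency usage, not the proposed interprocess API):

```d
import std.concurrency;

// Worker thread: receive one string and echo a reply to the owner.
void worker(Tid owner)
{
    receive((string msg) {
        send(owner, "got: " ~ msg);
    });
}

void main()
{
    auto tid = spawn(&worker, thisTid);
    send(tid, "hello");
    // Block until the worker's reply arrives.
    auto reply = receiveOnly!string();
    assert(reply == "got: hello");
}
```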


Re: What is best way to communicate between computer in local network ?

2014-06-28 Thread John Colvin via Digitalmars-d-learn

On Saturday, 28 June 2014 at 16:08:18 UTC, Sean Kelly wrote:

On Friday, 27 June 2014 at 13:03:20 UTC, John Colvin wrote:


It's an application and network dependant decision, but I 
would suggest http://code.dlang.org/packages/zmqd as suitable 
for most situations.


Yeah, this would be my first choice.  Or HTTP if integration 
with other applications is an option.  I really like JSON-RPC, 
though it seems to not get much attention.  Longer term, I'd 
like to extend the messaging in std.concurrency to allow 
interprocess communication as well.


An MPI backend for std.concurrency would be a game-changer for D 
in certain scientific circles.


Re: What is best way to communicate between computer in local network ?

2014-06-28 Thread John Colvin via Digitalmars-d-learn

On Saturday, 28 June 2014 at 16:20:31 UTC, John Colvin wrote:

On Saturday, 28 June 2014 at 16:08:18 UTC, Sean Kelly wrote:

On Friday, 27 June 2014 at 13:03:20 UTC, John Colvin wrote:


It's an application and network dependant decision, but I 
would suggest http://code.dlang.org/packages/zmqd as suitable 
for most situations.


Yeah, this would be my first choice.  Or HTTP if integration 
with other applications is an option.  I really like JSON-RPC, 
though it seems to not get much attention.  Longer term, I'd 
like to extend the messaging in std.concurrency to allow 
interprocess communication as well.


An MPI backend for std.concurrency would be a game-changer for 
D in certain scientific circles.


Note: I don't have much love for MPI, but it's the only practical
option on many clusters currently.


Re: What is best way to communicate between computer in local network ?

2014-06-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2014-06-28 at 16:21 +, John Colvin via Digitalmars-d-learn
wrote:

  An MPI backend for std.concurrency would be a game-changer for 
  D in certain scientific circles.

Pragmatically, I think this would be a good idea…

 Note: I don't have much love for MPI, but it's the only practical
 option on many clusters currently.

…philosophically MPI sucks.

The problem is that C, C++, Fortran, Chapel, X10 all assume MPI is the
cluster transport.

Fortunately in the Groovy, GPars, Java, Scala, Akka world, there are
other options, ones that are much nicer :-)

Sadly, I don't have time to contribute to any constructive work on this
just now. And I ought to be doing a review and update to
std.parallelism…

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: What is best way to communicate between computer in local network ?

2014-06-28 Thread Sean Kelly via Digitalmars-d-learn
On Saturday, 28 June 2014 at 17:11:51 UTC, Russel Winder via 
Digitalmars-d-learn wrote:


Sadly, I don't have time to contribute to any constructive work on this
just now. And I ought to be doing a review and update to
std.parallelism…


That's fine.  I have zero free time until August.


Re: SList: How do I use linearRemove?

2014-06-28 Thread sigod via Digitalmars-d-learn

On Thursday, 26 June 2014 at 16:50:38 UTC, Lemonfiend wrote:

This doesn't (why?):
auto s = SList!int(1, 2, 3, 4, 5);
auto s2 = SList!int(1, 2, 3, 4, 5);
auto r = s2[];
popFrontN(r, 1);
auto r1 = s.linearRemove(r);


This is intended behavior: 
https://issues.dlang.org/show_bug.cgi?id=12999


Can't modify this

2014-06-28 Thread Ary Borenszweig via Digitalmars-d-learn

This doesn't work:

class Foo {
  this() {
this = new Foo;
  }
}

Error: Cannot modify 'this'

However you can do this:

class Foo {
  this() {
auto p = &this;
*p = new Foo();
  }
}

It even changes the value of this!

Should that compile? I mean, it's the same as modifying 'this'...


Re: Can't modify this

2014-06-28 Thread H. S. Teoh via Digitalmars-d-learn
On Sat, Jun 28, 2014 at 05:40:19PM -0300, Ary Borenszweig via 
Digitalmars-d-learn wrote:
 This doesn't work:
 
 class Foo {
   this() {
 this = new Foo;
   }
 }
 
 Error: Cannot modify 'this'
 
 However you can do this:
 
 class Foo {
   this() {
  auto p = &this;
 *p = new Foo();
   }
 }
 
 It even changes the value of this!
 
 Should that compile? I mean, it's the same as modifying 'this'...

I'd say, file an enhancement request on the bug tracker.

However, there comes a point, where given enough indirections, it would
be infeasible for the compiler to figure out exactly where everything
points, and so you'll be able to circumvent the compiler check somehow.
If you're out to thwart the compiler, then eventually you will succeed,
but it begs the question, why?


T

-- 
Never wrestle a pig. You both get covered in mud, and the pig likes it.


Re: Can't modify this

2014-06-28 Thread Ary Borenszweig via Digitalmars-d-learn

On 6/28/14, 6:21 PM, H. S. Teoh via Digitalmars-d-learn wrote:

On Sat, Jun 28, 2014 at 05:40:19PM -0300, Ary Borenszweig via 
Digitalmars-d-learn wrote:

This doesn't work:

class Foo {
   this() {
 this = new Foo;
   }
}

Error: Cannot modify 'this'

However you can do this:

class Foo {
   this() {
  auto p = &this;
 *p = new Foo();
   }
}

It even changes the value of this!

Should that compile? I mean, it's the same as modifying 'this'...


I'd say, file an enhancement request on the bug tracker.

However, there comes a point, where given enough indirections, it would
be infeasible for the compiler to figure out exactly where everything
points, and so you'll be able to circumvent the compiler check somehow.
If you're out to thwart the compiler, then eventually you will succeed,
but it begs the question, why?


T


I think that if you disallow taking the address of `this`, then the 
problem is solved.


This is not a big issue (more a curiosity). I just wanted to know what 
is the correct way to do in this case.




Why is the Win32 boilerplate the way it is?

2014-06-28 Thread Jeremy Sorensen via Digitalmars-d-learn
I found an example of boilerplate code for Win32 programming in D 
here:

http://wiki.dlang.org/D_for_Win32

I have some questions.
1. It appears that the call to myWinMain from WinMain is to 
ensure that any exception or error is caught. At first glance it 
looks like this is to ensure that Runtime.terminate() gets 
called, but in fact it doesn't: the catch block doesn't call it and 
there is no scope(exit).  Is this a problem? (And what would 
happen if you didn't catch the exception?)
2. Why does the boilerplate return 0 on success and failure? (If 
the return code is irrelevant, why the comment that says failed 
next to the return code?)
3. I can't imagine a technical reason why the myWinMain signature 
has to match the WinMain signature. Wouldn't it be better to omit 
the hPrevInstance since it isn't used? (Or are we preserving 
backwards compatibility with Win16?).


If there is a resource somewhere that explains all this I would 
happy to consult it but I couldn't find anything.


Thanks.


Re: GC.calloc(), then what?

2014-06-28 Thread safety0ff via Digitalmars-d-learn

On Friday, 27 June 2014 at 23:26:55 UTC, Ali Çehreli wrote:


I appreciated your answers, which were very helpful. What I 
meant was, I was partially enlightened but still had some 
questions. I am in much better shape now. :)


Yea, I understood what you meant. :)



[Issue 12990] utf8 string not read/written to windows console

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=12990

--- Comment #5 from sum.pr...@gmail.com ---
This time it returned an empty array ([]).

Thanks.

--


[Issue 11946] need 'this' to access member when passing field to template parameter

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=11946

Jacob Carlborg d...@me.com changed:

   What|Removed |Added

 CC||d...@me.com

--- Comment #39 from Jacob Carlborg d...@me.com ---
I'm not sure if this is related but I have a similar case as Vladimir's first
post:



module foo;

static int f(A...)() { pragma(msg, typeof(A)); return 0; }



module bar;

import foo;

struct S { private int x; enum y = f!x(); }



Here x is private and I'm accessing the tuple in f. The error I get is 
"Error: struct bar.S member x is not accessible". This worked fine in DMD
2.064.2.

--


[Issue 12922] Solution is always rebuilt in Visual Studio 2010

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=12922

Elias ariett...@gmail.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #2 from Elias ariett...@gmail.com ---
OK, thank you for this workaround!

--


[Issue 10018] Value range propagation for immutable variables

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=10018

--- Comment #11 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/D-Programming-Language/dmd

https://github.com/D-Programming-Language/dmd/commit/45a26d5d89c22b74bf4cc9f37eaad8c21c53ea80
Issue 10018 - Add VRP support for const/immutable variables

https://github.com/D-Programming-Language/dmd/commit/2ffcbeb68e06b31d6be8553acf3462a2d1926b12
Merge pull request #3699 from lionello/bug10018

Issue 10018 - Add VRP support for const/immutable variables
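The change lets value range propagation see through const/immutable initializers, so code like this (a sketch) compiles without an explicit cast:

```d
void main()
{
    immutable int x = 100;
    // With VRP for immutable variables, the compiler knows x's exact
    // value, so the narrowing assignment needs no cast:
    byte b = x;
    assert(b == 100);
}
```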

--


[Issue 10018] Value range propagation for immutable variables

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=10018

Kenji Hara k.hara...@gmail.com changed:

   What|Removed |Added

 Status|REOPENED|RESOLVED
 Resolution|--- |FIXED
   Severity|normal  |enhancement

--


[Issue 13000] Casts should be removed to utilize features of inout

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13000

Kenji Hara k.hara...@gmail.com changed:

   What|Removed |Added

   Keywords||pull
   Hardware|x86 |All

--- Comment #1 from Kenji Hara k.hara...@gmail.com ---
https://github.com/D-Programming-Language/phobos/pull/2279

--


[Issue 9754] Bad codegen with 0-size args and -fPIC -O

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=9754

Vladimir Panteleev thecybersha...@gmail.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #2 from Vladimir Panteleev thecybersha...@gmail.com ---
Apparently it was fixed by this pull request:
https://github.com/D-Programming-Language/dmd/pull/1752

--


[Issue 9754] Bad codegen with 0-size args and -fPIC -O

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=9754

Vladimir Panteleev thecybersha...@gmail.com changed:

   What|Removed |Added

 Resolution|FIXED   |WORKSFORME

--


[Issue 9754] Bad codegen with 0-size args and -fPIC -O

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=9754

--- Comment #3 from Vladimir Panteleev thecybersha...@gmail.com ---
I think this bug might actually be a duplicate of issue 9722.

--


[Issue 6498] [CTFE] copy-on-write is slow and causes huge memory usage

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=6498

Per Nordlöw per.nord...@gmail.com changed:

   What|Removed |Added

 CC||per.nord...@gmail.com

--- Comment #3 from Per Nordlöw per.nord...@gmail.com ---
Don: Is there a Github PR or branch for your changes or are these things
normally kept secret because this issue has a bounty?

--


[Issue 6498] [CTFE] copy-on-write is slow and causes huge memory usage

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=6498

Iain Buclaw ibuc...@gdcproject.org changed:

   What|Removed |Added

 CC||ibuc...@gdcproject.org

--- Comment #4 from Iain Buclaw ibuc...@gdcproject.org ---
FYI, all PR's have been merged in.

I won't bother listing them all (there's a lot that was done over 2012/2013). 
There has been no work on this since June 2013 IIRC.

https://github.com/D-Programming-Language/dmd/pull/1778#issuecomment-19964496


What should be focused on (thanks to Walter's idea of allocating but not
freeing memory) is to limit just how much memory is allocated from CTFE, by
possibly finding ways to re-use rather than re-allocate memory, or maybe giving
CTFE its own allocator (it is a backend in its own right, after all).

--


[Issue 13000] Casts should be removed to utilize features of inout

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13000

github-bugzi...@puremagic.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--


[Issue 13000] Casts should be removed to utilize features of inout

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13000

--- Comment #2 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/D-Programming-Language/phobos

https://github.com/D-Programming-Language/phobos/commit/18efc8c5dc0606aaf8589d01d0f7229a4ce3277c
fix Issue 13000 - Casts should be removed to utilize features of inout

https://github.com/D-Programming-Language/phobos/commit/aab91e5f3f42d3838fddf98c956baeab9741bdad
Merge pull request #2279 from 9rnsr/fix13000

Issue 13000 - Casts should be removed to utilize features of inout

--


[Issue 12996] SList: linearRemove cannot remove root node

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=12996

--- Comment #2 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/D-Programming-Language/phobos

https://github.com/D-Programming-Language/phobos/commit/deb84d53ec799a57931c7e8da039938a5dbd142a
Fix Issue 12996 - SList: linearRemove cannot remove root node

https://github.com/D-Programming-Language/phobos/commit/433fdd4f346cb326f692071c987b28361624d008
Merge pull request #2271 from sigod/issue_12996

Fix Issue 12996 - SList: linearRemove cannot remove root node

--


[Issue 13002] New: DMD 2.066 prep: 32-bit build fails on Ubuntu via create_dmd_release

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13002

  Issue ID: 13002
   Summary: DMD 2.066 prep: 32-bit build fails on Ubuntu via
create_dmd_release
   Product: D
   Version: D2
  Hardware: x86
OS: Linux
Status: NEW
  Severity: regression
  Priority: P1
 Component: DMD
  Assignee: nob...@puremagic.com
  Reporter: edwards...@gmail.com

The following issues occur when attempting to compile DMD on Ubuntu; 
requesting assistance in identifying the appropriate resolution:

Building DMD 32-bit
backend/strtold.c: In function ‘longdouble strtold_dm(const char*, char**)’:
backend/strtold.c:346:36: warning: dereferencing type-punned pointer 
will break strict-aliasing rules [-Wstrict-aliasing]
*(long long *)ldval = msdec;
^
Copying file '/tmp/.create_dmd_release/dmd/src/dmd' to 
'/tmp/.create_dmd_release/dmd/src/dmd32'.
Building Druntime 32-bit
Building Phobos 32-bit
std/mmfile.d(344): Deprecation: alias core.sys.posix.sys.mman.MAP_ANON 
is deprecated - Please use core.sys.linux.sys.mman for non-POSIX extensions
std/mmfile.d(344): Deprecation: alias core.sys.posix.sys.mman.MAP_ANON 
is deprecated - Please use core.sys.linux.sys.mman for non-POSIX extensions
Building Druntime Docs
Building Phobos Docs
std/bitmanip.d(2354): Warning: Ddoc: function declaration has no 
parameter 'index'
std/bitmanip.d(2354): Warning: Ddoc: parameter count mismatch
std/bitmanip.d(2958): Warning: Ddoc: parameter count mismatch
std/bitmanip.d(3314): Warning: Ddoc: parameter count mismatch
std/getopt.d(388): Warning: Ddoc: Stray '('. This may cause incorrect 
Ddoc output. Use $(LPAREN) instead for unpaired left parentheses.
std/mmfile.d(344): Deprecation: alias core.sys.posix.sys.mman.MAP_ANON 
is deprecated - Please use core.sys.linux.sys.mman for non-POSIX extensions
std/parallelism.d(1807): Warning: Ddoc: function declaration has no 
parameter 'source'
std/parallelism.d(1807): Warning: Ddoc: function declaration has no 
parameter 'bufSize'
std/parallelism.d(1807): Warning: Ddoc: function declaration has no 
parameter 'workUnitSize'
std/process.d(1615): Warning: Ddoc: function declaration has no 
parameter 'program'
std/process.d(1615): Warning: Ddoc: function declaration has no 
parameter 'command'
std/process.d(1615): Warning: Ddoc: parameter count mismatch
std/process.d(1953): Warning: Ddoc: function declaration has no 
parameter 'program'
std/process.d(1953): Warning: Ddoc: function declaration has no 
parameter 'command'
std/process.d(1953): Warning: Ddoc: parameter count mismatch
std/random.d(1601): Warning: Ddoc: Stray ')'. This may cause incorrect 
Ddoc output. Use $(RPAREN) instead for unpaired right parentheses.
std/random.d(1601): Warning: Ddoc: Stray ')'. This may cause incorrect 
Ddoc output. Use $(RPAREN) instead for unpaired right parentheses.
std/string.d(1044): Warning: Ddoc: parameter count mismatch
std/string.d(1109): Warning: Ddoc: parameter count mismatch
std/string.d(1189): Warning: Ddoc: parameter count mismatch
std/string.d(1271): Warning: Ddoc: parameter count mismatch
std/uni.d(2333): Warning: Ddoc: Stray ')'. This may cause incorrect Ddoc 
output. Use $(RPAREN) instead for unpaired right parentheses.
std/uni.d(2333): Warning: Ddoc: Stray ')'. This may cause incorrect Ddoc 
output. Use $(RPAREN) instead for unpaired right parentheses.
std/uni.d(2333): Warning: Ddoc: Stray ')'. This may cause incorrect Ddoc 
output. Use $(RPAREN) instead for unpaired right parentheses.
make: *** No rule to make target 
`../web/phobos-prerelease/std_container_package.html', needed by `html'. 
Stop.
make: *** Waiting for unfinished jobs
std/net/curl.d(2450): Warning: Ddoc: function declaration has no 
parameter 'dlTotal'
std/net/curl.d(2450): Warning: Ddoc: function declaration has no 
parameter 'dlNow'
std/net/curl.d(2450): Warning: Ddoc: function declaration has no 
parameter 'ulTotal'
std/net/curl.d(2450): Warning: Ddoc: function declaration has no 
parameter 'ulNow'
std/net/curl.d(2450): Warning: Ddoc: parameter count mismatch
std/net/curl.d(3058): Warning: Ddoc: function declaration has no 
parameter 'dlTotal'
std/net/curl.d(3058): Warning: Ddoc: function declaration has no 
parameter 'dlNow'
std/net/curl.d(3058): Warning: Ddoc: function declaration has no 
parameter 'ulTotal'
std/net/curl.d(3058): Warning: Ddoc: function declaration has no 
parameter 'ulNow'
std/net/curl.d(3058): Warning: Ddoc: parameter count mismatch
std/net/curl.d(3396): Warning: Ddoc: function declaration has no 
parameter 'dlTotal'
std/net/curl.d(3396): Warning: Ddoc: function declaration has no 
parameter 'dlNow'
std/net/curl.d(3396): Warning: Ddoc: function declaration has no 
parameter 'ulTotal'
std/net/curl.d(3396): Warning: Ddoc: function declaration has no 
parameter 'ulNow'
std/net/curl.d(3396): Warning: Ddoc: parameter count mismatch
create_dmd_release: Error: Command failed (ran from dir 
'/tmp/.create_dmd_release/phobos'): 

[Issue 12981] Can't refer to 'outer' from mixin template

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=12981

Kenji Hara k.hara...@gmail.com changed:

   What|Removed |Added

   Keywords||pull
   Hardware|x86_64  |All
 OS|Linux   |All

--- Comment #1 from Kenji Hara k.hara...@gmail.com ---
https://github.com/D-Programming-Language/dmd/pull/3700

--


[Issue 12859] Read-modify-write operation for shared variable in Phobos

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=12859

Kenji Hara k.hara...@gmail.com changed:

   What|Removed |Added

   Keywords||pull

--- Comment #2 from Kenji Hara k.hara...@gmail.com ---
https://github.com/D-Programming-Language/phobos/pull/2281

--


[Issue 13001] Support VRP for ternary operator (CondExp)

2014-06-28 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13001

--- Comment #2 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/D-Programming-Language/dmd

https://github.com/D-Programming-Language/dmd/commit/6627d35269e34e33919f11463d9208a3b598b71a
Issue 13001 - Add VRP support for ternary operator ?:

https://github.com/D-Programming-Language/dmd/commit/3126735908e8f6db1428ea3b2c74438573a14648
Merge pull request #3698 from lionello/bug13001

Issue 13001 - Add VRP support for ternary operator ?:
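With this change, the range of a ?: expression is the union of its branches' ranges, so for example (a sketch) a narrowing assignment compiles even when the condition is a runtime value:

```d
void main(string[] args)
{
    // Both branches fit in a ubyte, so no cast is needed even though
    // args.length is only known at runtime:
    ubyte b = (args.length > 1) ? 10 : 20;
    assert(b == 10 || b == 20);
}
```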

--

