On Thu, 08 Dec 2011 21:23:11 -0500, so <s...@so.so> wrote:
On Fri, 09 Dec 2011 03:19:34 +0200, Robert Jacques <sandf...@jhu.edu> wrote:

On Thu, 08 Dec 2011 13:17:44 -0500, so <s...@so.so> wrote:

On Thu, 08 Dec 2011 19:25:10 +0200, Dejan Lekic <dejan.le...@gmail.com> wrote:


type a = a + 2; // compiles with no errors, no warnings, no explosions (that I know of)

If "type" has the default initialiser, then what is the problem?

What does it do in the C and D contexts?
1. It does different things in each.
2. The C version is even worse because of the debug/release difference.
3. If D still does this, it violates the B-D rule (either it does the same
thing or it doesn't compile at all). Actually there are many things that
violate this rule...

type a; // even this simple code violates the B-D rule.
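
(For concreteness, a minimal D sketch of what such declarations actually
hold; the variable names are illustrative, and the C comments describe the
usual behaviour there, not a guarantee.)

    void main()
    {
        int    a;   // D: always 0; in C the value is indeterminate and reading it is UB
        double d;   // D: always NaN (double.init)
        char   c;   // D: always 0xFF (char.init)

        assert(a == 0);
        assert(d != d);    // NaN is the only value that compares unequal to itself
        assert(c == 0xFF);
    }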

What is the use case?
. Is there any?
. If you think there isn't one, yet the compiler still allows it, it
generates nothing but bugs (from experience :) ).

The question should be: why does it exist at all?

Actually, these statements don't violate the B-D rule. In C/C++ the
value of 'a' prior to initialization is undefined by the spec (reading
it is undefined behavior) and in practice tends to be some random stack
value. D just sets that value to a consistent default, which is
completely compliant with the C spec.
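
(A small sketch of that "consistent default": each D type carries it as
its .init property, so the value is fixed by the language rather than by
any particular compiler.)

    unittest
    {
        assert(int.init == 0);
        assert(char.init == 0xFF);
        assert(bool.init == false);
        assert(double.init != double.init); // double.init is NaN
        assert(string.init is null);        // arrays and references default to null
    }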

I disagree. As you said, in C/C++ the same thing gets assigned random
values, while in D it doesn't. As a result, it changes both the coders'
expectations and the program's outcome. Even the definitions are
different: undefined there, defined here. I am not sure how to express
this.

As far as I understand, you are saying: because uninitialized variables
are undefined in C, D can act as just another C compiler and interpret
this rule as it pleases. But in practice no compiler I know of does that.
In a discussion about language specifications, practical implementation
details seem tangential to me. Besides, many C++ compilers do set
variables to a default bit pattern in debug mode, in order to better
detect and account for uninitialized variables. D's major difference is
that it also does this in release mode. Moreover, strictly speaking, the
B-D rule is about valid, portable C/C++ code, which (arguably) excludes
anything that uses uninitialized variables. In other words, the B-D rule
isn't about D mimicking the behavior of a particular C/C++ compiler;
it's about the C spec.
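
(A tiny sketch of that point; the build commands in the comment are just
the usual dmd invocations, and nothing else here is assumed beyond the
spec-defined defaults.)

    import std.stdio : writeln;

    void main()
    {
        int counter;     // no explicit initializer
        double reading;  // ditto

        // Prints "0 nan" whether built with `dmd app.d` or
        // `dmd -release -O app.d`: the defaults come from the language,
        // not from a debug-mode compiler convenience.
        writeln(counter, " ", reading);
    }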

And "assuming the variable initialized to some value" is a bad programming
practice, which i should say the most popular among its kind.
I totally agree, from a normal control flow perspective. But .init has many 
uses, particularly in error detection and and repeatability of .init greatly 
eases debugging.
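
(A hedged sketch of the error-detection use: the function and its
"parsing" below are made up for illustration, but the NaN default it
relies on is real.)

    import std.math : isNaN;

    double parseReading(string line)
    {
        double value;        // starts as double.init, i.e. NaN, by the spec

        if (line.length)     // stand-in for real parsing logic
            value = 42.0;

        // Because "never assigned" is always NaN, the failure mode is both
        // detectable and reproducible from run to run.
        if (isNaN(value))
            throw new Exception("no reading parsed from: " ~ line);

        return value;
    }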

So it clashes with another thing about D: pragmatism.
I'm not sure what you mean by that.
