On Fri, 09 Dec 2011 03:19:34 +0200, Robert Jacques <sandf...@jhu.edu>
wrote:

On Thu, 08 Dec 2011 13:17:44 -0500, so <s...@so.so> wrote:

On Thu, 08 Dec 2011 19:25:10 +0200, Dejan Lekic <dejan.le...@gmail.com>
wrote:


type a = a + 2; // compiles with no errors, no warnings, no explosions (that I know of)

If "type" has the default initialiser, then what is the problem?

What does it do in a C context and in a D context?
1. It does different things in each (see the sketch below).
2. The C version is even worse because of the debug/release difference.
3. If D still does this, it violates the B-D rule: C code either does the same thing in D or doesn't compile at all. (Actually, many things violate this rule...)
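
To make 1 and 2 concrete, here is a minimal sketch. I've split the declaration from the use so it stands on its own; the single-line form above raises the same question:

void main()
{
    int a;     // D: default-initialized to int.init, i.e. 0
               // C: indeterminate; reading it is undefined behaviour
    a = a + 2; // D: deterministically 2
               // C: garbage, often differing between debug and release
    assert(a == 2); // holds in D; meaningless in the C counterpart
}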

type a; // even this simple code violates the B-D rule.
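
For the record, the defaults are part of the language: every type carries its own .init value, so any conforming D compiler should pass these checks:

static assert(int.init == 0);              // integers default to 0
static assert(bool.init == false);         // bool defaults to false
static assert(char.init == 0xFF);          // char defaults to an invalid code unit
static assert(double.init != double.init); // floats default to NaN, and NaN != NaN
// In C, `int a;` at block scope leaves `a` indeterminate instead.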

What is the use case?
. Is there any?
. If you think there is none, yet the compiler still allows it, it generates nothing but bugs (from experience :) ).

Question should be, why does it exist at all?

Actually, these statements don't violate the B-D rule. In C/C++ the value of 'a' prior to assignment is indeterminate (the spec leaves it undefined) and in practice tends to be some random stack value. D just sets that value to a consistent default, which is completely compliant with the C spec.
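
A side note: D also lets you opt out of that default explicitly where the extra store is unwanted, so the C behaviour stays available but becomes visible in the source:

int a = void; // explicitly uninitialized: reading it before assignment
              // is as unsafe as in C, but the intent is now spelled out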

I disagree. As you said, while in C/C++ the same variable gets random stack values, in D it doesn't. As a result, it changes both the coder's expectations and the program's outcome. Even the definitions are different: undefined there, defined here.
I am not sure how to express this.

As far as I understand, you are saying: because uninitialized variables are undefined in C, D can act as just another C compiler and interpret this rule as it pleases. But in practice no compiler I know of does that. And "assuming the variable is initialized to some value" is a bad programming practice, and, I should say, the most popular of its kind.
So it clashes with another thing about D: pragmatism.
