On Thu, 08 Dec 2011 19:25:10 +0200, Dejan Lekic <dejan.le...@gmail.com> wrote:


> type a = a + 2; // compiles with no errors, no warnings, no explosions (that i know of)
>
> If "type" has the default initialiser, then what is the problem?

What does it do in a C context and in a D context?
1. It does different things in the two languages.
2. The C version is even worse because of the debug/release difference (see the C sketch below).
3. If D still does this, it violates the B-D rule (code accepted by both languages either does the same thing or doesn't compile at all). Actually, there are many things that violate this rule...
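To illustrate point 2, here is a minimal sketch in plain C (nothing assumed beyond the standard library); take it as an illustration, not a guaranteed output:

#include <stdio.h>

int main(void)
{
    /* 'a' is in scope inside its own initializer, so the expression
       reads the variable before it ever gets a value: undefined
       behaviour. A debug build may happen to zero the stack slot,
       a release build usually leaves garbage there, hence the
       debug/release difference. */
    int a = a + 2;

    printf("a = %d\n", a); /* could print anything */
    return 0;
}

In D the same-looking declaration is the one quoted above that compiles without complaint, and nothing forces the two languages to agree on the result.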

type a; // even this simple code violates the B-D rule.
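Same thing, as a C sketch of the plain declaration; the comment about D reflects my understanding that D default-initializes an int to int.init, i.e. 0:

#include <stdio.h>

int main(void)
{
    int a;                 /* C: indeterminate value; reading it below
                              is undefined behaviour. In D, 'int a;'
                              is default-initialized to int.init == 0. */
    printf("a = %d\n", a); /* C: may print anything; D would print 0.  */
    return 0;
}

So source that looks identical in the two languages quietly produces different results, which is exactly what the rule is supposed to prevent.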

What is the use case?
- Is there any?
- If you can't think of one, yet the compiler still allows it, it generates nothing but bugs (speaking from experience :) ).

The question should be: why does it exist at all?
