Thanks again, H. S. Teoh,

for yet another informative and well-informed post.

Allow me one remark though.

"Easy to read" can mean a lot of things and coming from soneone with a strong C/C++ background, it usually doesn't mean that much. No offense intended, I do very well remember my own attitude over the years.

Let me put it in the form of a confession. I confess that 25 years ago I considered Pascal programmers to be lousy hobby boys, Ada programmers bureaucratic perverts and Eiffel guys simply insane perverts.

It probably doesn't shine a nice light on myself, but I have to follow up with another and potentially more painful confession: it took me over a decade to even rethink my position on C/C++, and another half decade to consider it possible that the Pascal, Ada, and Eiffel guys actually might have gotten something right, if only in minor details ...

Today, some hundreds (or thousands?) of hours of painful bug hunting later, due to things like a stray '=' in an if clause, I'm ready to admit that any language using '=' for assignment rather than, for instance, ':=' is effectively creating a trap for the people using it.
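To make that concrete, here is a minimal sketch (the variable names are made up). In C/C++ the commented-out line compiles and silently assigns; D, if I recall correctly, at least refuses a bare assignment used as a condition, though only a distinct token like ':=' would remove the ambiguity at the root:

    void main()
    {
        int retries = 3;
        int maxRetries = 3;

        // In C/C++ this compiles and silently assigns, then tests the result:
        //     if (retries = maxRetries) { ... }
        // D rejects a bare assignment used as a condition.
        if (retries == maxRetries)   // what was actually meant
        {
            // ...
        }
    }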

Today I look at Ada and Eiffel with great respect.

I've seen hints from the authors of D themselves that '++' and '--' might not have been the wisest course of action. So I stand here asking: why the hell did they implement them? For those who feel that life without '++' is impossible, it would be very simple to have an editor automagically expand "x++" to "x := x + 1". Having seen corporations in serious trouble because their system broke (or happily continued to run while producing erroneous data ...) over this "small detail", I have a hard time defending '++'. ("Save 5 seconds of typing per day and risk your company!")
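Just a small sketch of what I mean, nothing more: the pre/post variants of '++' are exactly the kind of saved keystrokes that cost hours later, while the explicit spelling an editor could expand to leaves no room for doubt:

    import std.stdio;

    void main()
    {
        int x = 5;
        int a = x++;   // a == 5, x == 6: value taken before the increment
        int b = ++x;   // b == 7, x == 7: value taken after the increment
        writeln(a, " ", b, " ", x);   // prints "5 7 7"

        // The explicit form an editor could expand "y++" into:
        int y = 5;
        int c = y;     // take the value first ...
        y = y + 1;     // ... then increment; no hidden ordering to remember
    }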

Another issue (from an Ada background): why "byte" ... (the complete series up to) ... "cent"? Bytes happen to be important for CPUs, not for the world out there. I wouldn't like to count the gazillion cases where code went belly up because something didn't fit into 16 bits. Why not the other way around, why not the whole shebang, i.e. 4 (or 8) bytes as the default for a single fixed-point type ("int"), plus a mechanism to specify what is actually needed? For days in a month we'd have "int'b5 dpm;" (2-pow-x notation) or "int'32 dpm;". Funnily enough, even D's authors seem to have had thoughts in that direction (without following through) when designing the dynamic array mechanism, where a dynamic array effectively gets 2-pow-x based de facto storage (a length of 6 officially used elements is, de facto, backed by an 8-element allocation).
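As a side note, that over-allocation can actually be observed. The sketch below only shows what the runtime happens to do; the exact growth policy is an implementation detail of druntime, so the concrete numbers are not guaranteed:

    import std.stdio;

    void main()
    {
        int[] a;
        foreach (i; 0 .. 6)
            a ~= i;           // append six elements one by one
        // capacity reports how far the array can grow without reallocating;
        // it is typically larger than the requested length (e.g. 7 or more here).
        writefln("length = %s, capacity = %s", a.length, a.capacity);
    }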

This happens to be a nice example of perspective. C's perspective (by necessity) was resource oriented, along the lines of "offer an 8-bit int so as not to waste 16 bits where 8 bits suffice". Yet we still do that in the 21st century, rather than acting more *human oriented* by leaving the decision about the size to the human. Don't underestimate that! The mere act of reflecting on how much storage is needed is valuable and helps to avoid errors.
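For what it's worth, the "let the human state what is needed" idea can even be sketched in today's D with a few lines of template code. This is purely hypothetical (IntFor is a name I made up), and unlike Ada it only picks a storage size, it does not check ranges:

    // Hypothetical sketch: pick the storage type from the declared maximum.
    template IntFor(ulong maxValue)
    {
        static if (maxValue <= ubyte.max)       alias IntFor = ubyte;
        else static if (maxValue <= ushort.max) alias IntFor = ushort;
        else static if (maxValue <= uint.max)   alias IntFor = uint;
        else                                    alias IntFor = ulong;
    }

    // Days in a month: the requirement (at most 31) is stated by the human,
    // the storage size becomes the compiler's problem.
    IntFor!31 dpm;
    static assert(is(typeof(dpm) == ubyte));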

D is, no doubt, an excellent and modern incarnation of C/C++. As far as I'm concerned, D is *the* best C/C++ incarnation ever, hands down.

But is '=' really a holy issue? Would all D programmers have run away if D had ':=' as its assignment operator?

I wish D had done all the miraculous things it did, and then, on top, had allowed itself the luxury of being more human centric rather than sticking to a paradigm that was necessary 50 years ago (and even then not good, merely necessary).

BTW: I write this because D means a lot to me, not to bash it. For Java, to name an ugly example, I never wasted a single line of criticism; it's just not worth it. So please read what I say as written in a warm tone and not negatively minded.
