On 11/10/2011 9:23 PM, Walter Bright wrote:
On 11/10/2011 4:53 PM, bearophile wrote:
This is a (minor) wart of D. C# got this better, using the "sbyte" and
"ubyte" names. We have discussed this in past :-)

Sorry, but I think this is meaningless trivia.

There's been a lot of agonizing over names in D lately. They are a
soul-sucking quagmire of wasted time.


Vote++. Really bad names hurt, but there's no such thing as a perfect name. Any name will be confusing in some context unless it's ridiculouslyVerboseAndWastesALotOfSpaceAndIsImpossibleToTypeOrRead. Even when names do get confusing, this can be solved by using static import for whatever module the most confusion is coming from. (For example, due to its terse naming scheme I usually use static import for std.file.) Typing the fully qualified name resolves any confusion.
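For anyone who hasn't tried it, here's roughly what that looks like (the file name is just an illustration):

static import std.file;
import std.stdio : writeln;   // ordinary selective import, for contrast

void main()
{
    // With the static import, std.file's terse names (write, read, exists,
    // ...) never land in the current scope, so every use must be fully
    // qualified, and there is no confusion about which write() is meant.
    std.file.write("note.txt", "hello");
    writeln(std.file.readText("note.txt"));
}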


When I program in D, I take care to keep in mind that "int" is not an
integer number, but an unchecked, signed, two's-complement, 32-bit-wide
bit field. Forgetting this causes several bugs and troubles.

And when you program in floating point, you have to keep in mind the
limitations of the exponent and the mantissa.

And when you write a recursive function, you have to be mindful of stack
limitations.

There's no escaping what a computer is. In any engineering profession,
you've got to keep in mind that your design becomes real parts made out
of real materials, and those materials are far from ideal. Forget that
at one's peril.

Interesting argument and I completely agree. This illustrates a big difference between engineers/programmers/natural scientists and mathematicians/computer scientists: The former view equations/theory/models as a convenient approximation of reality, to be abandoned when it's no longer useful, and view all abstractions as leaky. To the latter, the theory/equations/models have a life of their own.

D is a language designed by an engineering mind (two if you count Andrei, who did his undergrad in EE but his Ph.D. in computer science), for engineering minds. I also have an engineering/natural science background and very little formal comp sci or theoretical/abstract mathematics training, which might explain D's appeal to me. Those who are into theoretical purity will unavoidably hate the language. Those who care more about being able to abstract away exactly what they want to and nothing more will love it.
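To put a concrete face on the quoted point, here's a tiny D example of the kind of machine-level behavior bearophile and Walter are talking about:

import std.stdio;

void main()
{
    // int is a 32-bit two's-complement bit field, not a mathematical
    // integer: arithmetic silently wraps around.
    int i = int.max;           // 2_147_483_647
    writeln(i + 1);            // prints -2147483648

    // float carries a 24-bit mantissa: adding 1 to 2^24 is simply lost.
    float f = 16_777_216.0f;   // 2^24
    float g = f + 1.0f;        // rounds back to 2^24
    writeln(g == f);           // prints true
}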


----

This reminds me of the experience of a good friend of mine, very smart,
who decided to learn Fortran by reading the DEC Fortran reference
manual. He wrote his first program, and it worked perfectly the first
time (did I say he was very smart?), except for one detail. It ran
incredibly slowly.

He pored over his listing and the manual, and could find nothing wrong.
Finally, he took the listing (back in those days, we worked from paper
listings) to my roommate and asked for help.

My roommate looked at it and said, there's your problem. You are writing
to a file by opening it, appending a single character, then closing it,
in a loop. Don't you know you're supposed to open the file once, do all
the writes, then close it?

No, said my friend, the manual said nothing about that, although when
you think about how the computer must work when writing a file, it is
obvious.

Again, pretty insightful. One of the conclusions I've come to about pedagogy is that no amount of rote knowledge can ever substitute for a good mental model of a system. When I was an undergrad, the biggest difference I noticed between the successful and unsuccessful students was that the successful ones tried to form a comprehensive mental model of the material, whereas the unsuccessful ones focused on memorizing facts and procedures by rote. Similarly, when I TA'd a course a couple of years ago, I tried to encourage the professor to ask exam questions that were as hard as possible to answer by rote (no canned procedure would work) but as easy as possible for anyone with a solid mental model of the material.
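For what it's worth, the difference the roommate spotted maps directly onto D; a rough sketch of the two patterns (function names made up):

import std.stdio;

// The pattern from the anecdote: reopen and close the file for every
// single character appended.
void slowWrite(string path, string data)
{
    foreach (c; data)
    {
        auto f = File(path, "a");   // open
        f.write(c);                 // append one character
    }                               // f is closed at the end of each iteration
}

// Open once, write everything, close once.
void fastWrite(string path, string data)
{
    auto f = File(path, "a");
    foreach (c; data)
        f.write(c);
}                                   // closed once, at scope exit

The second version is exactly what a mental model of "what the OS must do on every open" points you to.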
