Michiel Helvensteijn wrote:
"The compiler allows omitting type declarations only when types can be
unambiguously inferred from context."

That's not exactly true, is it? A small non-negative integer literal could
be an integer of any width or signedness, yet 'int' is chosen, seemingly
arbitrarily. The same goes for floating-point literals: there are multiple
floating-point types.

My points:
* The line I quoted is incorrect. Int/float literals are not unambiguous.
* D literals can have a suffix specifying the exact type. Perhaps that's
worth mentioning.

Thanks for your comments; I've added them to my todo list.

I find your style of writing a bit too informal, though easy to read.

I swear I was trying to keep the wit down. (I mean, if I'm left to my own devices... "The Case for D".) One very real problem I'm now experiencing is that I literally need to write a doctoral thesis by day and a book by night. It's often difficult to handle the swing between the two required styles. On the upside, heck, I think the dissertation will turn out to be one of the more readable ones :o).


Andrei
