Paulo Pinto:
> but thanks to the increase in security concerns in software, it actually seems to be picking up users in Europe, for projects where human lives are at risk.
Recently I have been seeing a little growth of interest in Ada. Maybe people have accepted that, despite its flaws, Ada is the best tool for certain purposes. But more probably several other more important factors are at play, probably political ones too.
> But D would, of course, be an easier upgrade path for C or C++ developers.
In my opinion it's not too hard for a decent C or C++ programmer to learn the Zen of Ada; the two languages share similar roots (Ada is closer to Pascal, C++ closer to C, but the paradigms used are similar. Example: Ada generics require explicit instantiation, but learning this doesn't require a C++ programmer to change his/her/hir brain a lot).
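A minimal sketch of that instantiation point (Apple_Count and Show_Instantiation are invented names): Ada.Text_IO.Integer_IO is a generic package, and unlike a C++ template it must be instantiated explicitly before you can call it:

with Ada.Text_IO;

procedure Show_Instantiation is
   --  A user-defined integral type (hypothetical, just for illustration).
   type Apple_Count is range 0 .. 1_000;

   --  Integer_IO is a generic package nested in Ada.Text_IO; it has to be
   --  instantiated explicitly for the concrete type before use.
   package Apple_IO is new Ada.Text_IO.Integer_IO (Apple_Count);

   N : Apple_Count := 42;
begin
   Apple_IO.Put (N);
   Ada.Text_IO.New_Line;
end Show_Instantiation;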
D seems fit for writing videogame engines, but even though D is safer than C and C++, I think that for high-integrity software D will need an external tool that enforces very strict safe coding standards, because @safe can't be enough. Example: by default in Ada, integral values don't overflow silently. Another example: there are strict and safe built-in ways to use multiple cores (tasks and protected objects). Another example: distinct kinds of pointers (access types).
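A tiny sketch of the overflow/range point (Sensor_Reading is an invented name): with a ranged integral type an out-of-range result raises Constraint_Error instead of wrapping around silently:

with Ada.Text_IO; use Ada.Text_IO;

procedure Range_Check_Demo is
   --  Hypothetical ranged type: values outside 0 .. 10_000 are rejected.
   type Sensor_Reading is range 0 .. 10_000;

   R : Sensor_Reading := 9_999;
begin
   --  The sum exceeds the declared range, so the assignment raises
   --  Constraint_Error at run time instead of wrapping silently.
   R := R + 5;
   Put_Line (Sensor_Reading'Image (R));
exception
   when Constraint_Error =>
      Put_Line ("out-of-range value caught");
end Range_Check_Demo;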
I have not used Ada a lot, but I like how you usually define (strong) types for most classes of variables, such as integral values, each with its own range, possibly as sub-ranges (subtypes) of other strong types, and so on.
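A small sketch of what I mean (Row_Index, Column_Index and Visible_Row are invented names): distinct strong types can't be mixed by accident, and a subtype narrows an existing type:

procedure Typed_Indices is
   --  Two distinct strong types: mixing them without a conversion is a
   --  compile-time error.
   type Row_Index    is range 1 .. 480;
   type Column_Index is range 1 .. 640;

   --  A subtype is a sub-range of an existing type, compatible with it
   --  but with a tighter constraint checked at run time.
   subtype Visible_Row is Row_Index range 1 .. 400;

   R : Row_Index    := 10;
   C : Column_Index := 10;
   V : Visible_Row  := 10;
begin
   --  R := C;             --  rejected at compile time: different types
   R := Row_Index (C);     --  allowed only with an explicit conversion
   V := R;                 --  same type; the 1 .. 400 constraint is checked
end Typed_Indices;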
A small example: if you have two matrices, where one contains (r, c) row-column pairs of indexes into the other matrix, it's easy to make the type system enforce that the pair items stay within the number of rows and columns of the other matrix. If the second matrix has to contain only positive values, plus let's say -3, -2 and -1 to signal special cases, it's easy to define such an integral type, and so on. The compiler verifies things at compile time where possible (example: if you write a literal for a string of enumerated chars, or for the second matrix, it verifies that the values of the literal are in the specified ranges) and inserts out-of-range tests for run time. Such range types and checks don't require advanced type system features to be implemented by Ada compilers, but they are able to catch early a lot of bugs that in C/C++/D bite you often.

In most C++ code I've seen there is not even a bit of such strong static typing of integral values. This makes the code harder to modify, and having just "int" used for ten different purposes makes it easy to use an integral variable where a totally different one was needed; this turns the C++/D code into a "soup" that's buggy and harder to debug. I don't like the carefree attitude of C-style languages regarding strong typing of integral values. I have seen that computer language features 30+ years old are able to avoid most of such troubles.
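A rough sketch of that matrix example (all names and sizes are invented; note that a plain contiguous range can't exclude zero, so this version also admits 0, and excluding it would need an Ada 2012 predicate):

procedure Matrix_Demo is
   --  Hypothetical sizes, just for illustration.
   type Row_Index is range 1 .. 50;
   type Col_Index is range 1 .. 80;

   --  Values of the data matrix: -3, -2 and -1 as special markers, then
   --  the ordinary values.
   type Cell_Value is range -3 .. 1_000_000;

   type Data_Matrix is array (Row_Index, Col_Index) of Cell_Value;

   --  Each element of the index matrix is a (row, column) pair that is
   --  guaranteed by its type to be a valid position in a Data_Matrix.
   type Position is record
      R : Row_Index;
      C : Col_Index;
   end record;

   type Index_Matrix is array (Row_Index, Col_Index) of Position;

   Data    : Data_Matrix  := (others => (others => 1));
   Indexes : Index_Matrix := (others => (others => (R => 1, C => 1)));
begin
   Data (1, 1) := -3;                     --  fine: a special marker
   --  Data (1, 1) := -4;                 --  flagged at compile time
   Indexes (2, 3) := (R => 50, C => 80);  --  checked against the ranges
end Matrix_Demo;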
In functional languages such as Haskell and F# this kind of work on indexes and ranged values is much less needed, but in high-performance Ada/C/C++/Delphi/D coding they are used quite often.
Bye, bearophile