On Tuesday, 14 July 2015 at 13:28:44 UTC, Ola Fosheim Grøstad wrote:
On Tuesday, 14 July 2015 at 11:58:08 UTC, Laeeth Isharc wrote:
hog - maybe that is right, but in practice stuff written in C, C++, or D in a sensible, mature, thoughtful fashion, without too much special effort given to optimisation, seems to just run fast, with no need for tuning or dark magic. I am probably not tuning it right, but even in 2015 one doesn't seem to be impressed by the performance and memory efficiency of Java apps.

Well, programs either run fast enough or they don't. If a program runs fast enough, or you run out of money, then you're done ;).

But "objectively fast" will have to be measured up to theoretical peak throughput (which you can calculate for a CPU). Most programs are nowhere close that since you need to be very careful with the size of the working set and layout in order to preload the cache, stay within cache level 1, store full cache lines, and keep all the "math units" (ALU ports) in the CPU busy.

The more abstraction levels you have, the more difficult it is to understand what will happen in the CPU (and that assumes you fully understand the internals of the _specific_ CPU, which change from generation to generation).

But there are restricted and annotated versions of C that offer provable safety at the cost of development time, while keeping C performance. That makes them much better than D for _secure_, performant system-level programming, since you also get both termination and run-time guarantees.
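
As one example of the annotated-C approach (a minimal sketch of my own, assuming Frama-C and its ACSL specification language): the contract lives in special comments on top of plain C, and the tool tries to prove it holds for every input. A complete proof would also need loop invariants, omitted here.

/*@ requires n > 0;
    requires \valid_read(a + (0 .. n-1));
    assigns \nothing;
    ensures \forall integer k; 0 <= k < n ==> \result >= a[k];
*/
int max_of(const int *a, int n)
{
    int m = a[0];                /* safe: the contract guarantees n > 0 */
    for (int i = 1; i < n; i++)
        if (a[i] > m)
            m = a[i];
    return m;
}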

But commercial life is about trade-offs and pragmatic choices, and the Pareto principle applies here too. That is, I should think the subset of reasonably secure, reasonably efficient systems-level programming is rather larger than the narrow domain you speak of above.

Well, either the program is correct or it isn't.

The question is whether you want to detect it (trap on overflow), pretend it didn't happen (D-style wrap-around), assume it cannot happen (gcc/clang at high optimization levels), or prevent compilation until it is guaranteed not to happen.
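
The first three policies can be spelled out in C terms (a hedged sketch of my own; __builtin_add_overflow is the GCC/Clang checked-arithmetic builtin, and -ftrapv would instead make signed overflow abort at run time):

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int a = INT_MAX;

    /* 1. Detect it: the builtin reports overflow instead of invoking UB. */
    int sum;
    if (__builtin_add_overflow(a, 1, &sum))
        puts("overflow detected");

    /* 2. Pretend it didn't happen: unsigned arithmetic wraps by definition,
       which is the behaviour D specifies for its integers. */
    uint32_t u = UINT32_MAX;
    printf("%u\n", u + 1);       /* prints 0 */

    /* 3. Assume it cannot happen: signed overflow is UB, so at -O2 a
       compiler is free to fold a check like (a + 1 > a) to true. */

    return 0;
}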

In some domains it is best to halt when something goes wrong (before you sell all your stock at the wrong price?); in other domains you should keep going (serving ads); in yet other domains a rare crash is okay, but shoddy performance isn't (a computer game with real-time ray tracing).

You have yourself suggested that if you want to use C and C++ in a safe way then it comes at quite a price.

No, I suggested that if you pick C++ over Java for rational reasons, you would probably get upset if you were told to use overflow-trapping ints and GC by default in C++. The C++ defaults are based on what most people use C++ for.

I also think UB is acceptable as long as the triggering conditions are clear and well understood, and the UB has a raison d'être. Often that is allowing more aggressive optimizations.
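
C's restrict qualifier is a good instance of that trade (a minimal sketch of my own): the caller promises the pointers don't alias, breaking the promise is UB, and in exchange the compiler can vectorize the loop without reloads or run-time alias checks.

#include <stdio.h>

/* dst and src are promised not to overlap; violating this is UB. */
void scale(float *restrict dst, const float *restrict src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;  /* no aliasing assumed, so loads can be batched */
}

int main(void)
{
    float in[4] = {1, 2, 3, 4}, out[4];
    scale(out, in, 2.0f, 4);      /* fine: disjoint arrays */
    /* scale(in + 1, in, 2.0f, 3);   would be UB: overlapping regions */
    printf("%f\n", out[3]);       /* 8.000000 */
    return 0;
}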

I don't think you can always trade these sorts of abstractions or compiler aid for performance. Sometimes you need to ensure security, performance, and readability/maintainability all by yourself, the hard way, by trading in developer time. Many people are willing to do so, and many companies depend on this, because that is the only way to do it in the present, given current hardware and/or budget constraints - sometimes even decades-old hardware constraints. Anyone familiar with the demoscene? :)

My point is that there are also many developers out there for whom a language with no undefined behavior and a theoretically sound design is not appealing at all, if those features put any sort of barrier in the way of getting the most out of the hardware.

I do have the feeling, perhaps wrongly, that there aren't many such developers on these forums, though, given the direction of most discussions.
