On 12/16/13 5:35 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dl...@gmail.com> wrote:
On Wednesday, 11 December 2013 at 00:19:50 UTC, H. S. Teoh wrote:
hardware. So arguably, no matter what code fragment you may present in
C++ or D, there's always a corresponding C code fragment that performs
equally fast or faster.

Yes, but the unix/C-way is to have many simple programs that work
together to make complex systems. C is meant to be used with makefiles
and source-generating tools such as Lex, Ragel and even the
C-preprocessor. C++ and D claim to be self-sufficient. C was never meant
to be, it was meant to be part of the unix eco-system. What you do with
templates and compile-time expressions in C++/D is what you do with
the ecosystem of tools in C. Therefore a comparison between C and C++/D
should include that C-ecosystem. If people don't like
source-generating tools, fine, but that is the unix/C way of
programming and it should be included when assessing the power of C
versus C++/D (and their template libraries).

But that obscures the fact that said C code
fragment may be written in an unmanageably convoluted style that no
one in their right mind would actually use in practice.

Well, C-programmers do, if they have tools that generate that convoluted
style from a readable input file (like lex).

but that proves nothing since the whole issue is writing *idiomatic* C
vs. *idiomatic* D, not writing things in an unnatural way just so you
can lay claim to the title of best performance.

Exactly, and idiomatic C is to use source-generating tools. Just about
all medium to large size C projects use such tools that go beyond the
C-preprocessor (which conceptually is a separate tool that is optional
in theory).

Nonsense. Using extralinguistic tools including code generators is not the exclusive appurtenance of C. Any large project uses some for various purposes. Needless to say, extralinguistic generators often compare poorly with language-integrated solutions. Look where the preprocessor has taken C - it's compromised the entire notion of preprocessing. And m4, more powerful and supposedly better, has only spawned more madness.

Anyway, one cannot discuss performance without discussing the target.
Much of the stuff in C makes sense on memory-constrained hardware, even
C-strings are great when you want to conserve memory and have
hardware-support for 0-guarded strings (string-instructions that will
stop on 0).  And, JITed regexps won't work on mobile platforms or
platforms that require signed code.

We are now getting a new range of memory constrained hardware,
transputer-like processors with many simple cores with fast local
memory and a saturated link to main memory. So the memory-efficient way
of getting performance is still highly relevant.

Performance is always contextual. E.g. I think OpenCL is just an
intermediate step to getting performance, compilers will soon have to
emit co-processor friendly code automagically and languages will have to
provide constructs that make that happen in the most efficient way. So
if C is out-dated then so are all other languages in current use… ;-)

Current applications also demand good modeling power. The days when one thousand lines was a nontrivial program are behind us. The right solution is a language that combines performance control with the modeling required by large applications.


Andrei
