Gabriel Sechan wrote:
From: Christopher Smith <[EMAIL PROTECTED]>
Gabriel Sechan wrote:
And this type of negligent and quite frankly unprofessional thinking is why my Athlon X2 has problems running the same application load my Pentium II 333MHz did. We write poor, slow, buggy code.

I'm gonna call bullshit on this. If it's the same code, then the performance bottleneck is clearly not the processor. If it isn't the same code, it's entirely possible that the problem is not poor, slow, buggy code, but that the apps have different functionality (you may not like the changes, but nonetheless it's not necessarily anything about the quality of code).

Of course it's not the same code. But my Athlon X2 has trouble running Word, an mp3 player, and its various firewalls. My P2 333 did the same job with fewer problems. Some of this (perhaps as much as 50%) is due to feature creep. The rest is because we've thrown out the craft of efficiency and instead count on the hardware to speed things up for us. And due to this, we're going to be in a shitload of trouble in a few years- hardware has just about peaked out, and multiple cores is only going to do so much.

...and how exactly did you come up with that 50% metric? Could it be 51%? Maybe 55%? Perhaps 99%? Hard to know really.

Interestingly though, Word, your mp3 player, your various firewalls and your OS are all probably written in the same language that they were written in for your P2 333. So it raises the question of how you see this as supporting any notion of causality with the language being used. Perhaps the problems you are seeing are actually the result of continuing to use the same language for increasingly complex applications.

I'm going to call bullshit on this too. "Development speed" is not the right metric, because our perceptions of what can and should be done quickly have changed substantially. More importantly, the advantage of high level languages is that they simplify code, which allows ever more complex projects to be tackled by smaller and smaller teams.

Bullshit. High level languages don't simplify code, they complicate it. Inheritance, exceptions, templating, etc. are all far more complicated to understand than simple procedural code.

Umm... "simple procedural code" and "high level language" are not in any way mutually exclusive (witness Cilk, Lua, Limbo, Erlang, etc.). You seem to be equating "high level language" with a specific abstraction strategy. In particular, the only language I can think of that has everything you list is C++, which I think most people would classify as a low level language that is indeed more complex. That said, there are some genuine abstractions that C++ helps you with, and it is hard to argue against them vs. rolling your own:

- overloading: you can do it with macros or a code generator, both of which are far more error prone (and I speak from very recent experience in this regard);
- vtables: again, far less error prone than doing your own stuffing of function pointers into structures;
- destructors: way the heck less error prone than having to write cleanup code at each and every point where your app uses a resource;
- exceptions: yeah, painful and error prone until you learn how to do them right, but nothing compared to littering your application with setjmp()/longjmp();
- templates: yeah, they are complicated as hell and a pain to debug, but compared to comparable C preprocessor macros they are far more likely to catch errors at compile time;
- typed data structures and algorithms: check the famous error rates of people implementing binary searches before pooh-poohing this;
- typed I/O routines: if you haven't screwed up your stack with some varargs mismatch, you haven't been coding in C as long as I have.

The end result is still a horribly complex language, but that is entirely attributable to the fact that C++ *by design* tries to fully maintain the same level of abstraction as C. So each abstraction added has to figure out a way to interact with the existing C language, and the result is unsurprisingly increased complexity. Now, if you master that language complexity you can write cleaner and simpler code, but the far easier and more effective path (and the one that has been advocated here) is to work with a language that operates at a higher level of abstraction.

Only on rare occasions does this complication actually come with a commensurate performance increase. You may end up doing more per line, but that does not mean simplification- it means more points of failure and more difficulty debugging.

It doesn't mean more points of failure if the language is taking care of these problems for you. You don't have to write a heap defragmenter if the language is doing that for you. As a consequence, it's not possible for you to have created a bug in your heap defragmenter and furthermore the language undoubtedly sheltered you from a lot of the possible problematic interactions between your code and the defragmenter. Sure, it's possible the language implementor introduced a bug in their implementation, but a) that's not your code and b) the same could be said of any library in C.

Just to take the discussion that started this thread as the simplest example. In C, it's entirely possible to have any number of issues around linking object code that just don't exist in most high-level languages, because they define a simple, platform agnostic interface for linkage. There's a trade-off in performance, though it goes both ways (binding tends to be more expensive, but it becomes far easier to do certain optimizations like runtime inlining), but the real win is that you don't have to spend any of your time dealing with these linkage problems, which in most cases really shouldn't be any part of the problem that a developer is working on.

--Chris

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
