Frank Atanassow <[EMAIL PROTECTED]> wrote,
> Manuel M. T. Chakravarty writes:
> > Florian Hars <[EMAIL PROTECTED]> wrote,
> >
> > > To cite the comments of Olof Torgersson on Haskell
> > > (www.cs.chalmers.se/pub/users/oloft/Papers/wm96/wm96.html):
> > >
> > > As a consequence the programmer loses control of what is really
> > > going on and may need special tools like heap-profilers to find out
> > > why a program consumes memory in an unexpected way. To fix the
> > > behavior of programs the programmer may be forced to rewrite the
> > > declarative description of the problem in some way better suited to
> > > the particular underlying implementation. Thus, an important
> > > feature of declarative programming may be lost -- the programmer
> > > not only has to be concerned with how a program is executed
> > > but also has to understand a model that is difficult to understand and
> > > very different from the intuitive understanding of the program.
> >
> > What's the problem with using a heap profiler? Sure, heap
> > usage can get out of hand when writing a straightforward,
> > maximally declarative (whatever that means) version of a
> > program. Is this a problem? No.
>
> One might argue analogously that C/C++ programmers ought to use tools like
> proof assistants and model checkers to help reason about
> the correctness of their programs. The fact is that, when
> we talk about the operational behavior or denotational
> semantics of a program, any reasoning "tool" which is _in_
> the language is preferable to a tool which is _outside_
> the language.
>
> Thus, when correctness is more important, it is better
> to use Haskell, the language constructs of which encourage
> (partial) correctness, than it is to use a C + a verifier;
> and when efficiency is more important, it is better to use
> C, the language constructs of which encourage efficiency,
> rather than Haskell + profiling tools.
I don't completely agree, because I think the situation is
not as symmetric as you picture it. Of course, all
languages have their strengths, and I personally don't have the
slightest problem coding in C when I think C is more
appropriate for the job than Haskell.
However, I would claim that for your average program, rapid
prototyping and correctness-first win out over
efficiency-first. I think that this position is backed by
current software engineering principles.
> > The trick about a language like Haskell is that you get to a
> > working prototype of your program extremely quickly. Then,
> > you can improve performance and, eg, use a heap profiler to
> > find the weak spots and rewrite them. And yes, then, I
> > usually lose the pure high-level view of my program in
> > these places in the code.
>
> I would rather not lose it at all, by expressing the operational behavior
> which I demand in a declarative fashion. This is, IMO, part of the reason why
> notions such as monads (and linearity) are valuable: they give you some
> measure of control over operational details without sacrificing the "pure
> high-level view".
Nevertheless, optimising a program usually means imposing
more and more control, taking into account the operational
aspects of the program. I am all for expressing this
additional control as nicely as possible, but ultimately it
means breaking abstraction levels.
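A standard instance of this kind of operational rewrite (my own minimal sketch, not something from Florian's message; it assumes GHC's strict left fold `Data.List.foldl'`): the fully declarative left-fold sum builds a long chain of thunks, and the fix is precisely the sort of evaluation-order control that leaks operational detail into the code.

```haskell
import Data.List (foldl')

-- Declarative version: foldl builds the unevaluated chain
-- (((0 + 1) + 2) + 3) + ... and only collapses it at the end,
-- so it needs O(n) space -- a classic heap-profiler find.
sumLazy :: [Integer] -> Integer
sumLazy = foldl (+) 0

-- Operationally tuned version: foldl' forces the accumulator
-- at each step, running in constant space. The declarative
-- reading is unchanged; only the evaluation order differs.
sumStrict :: [Integer] -> Integer
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumLazy [1..1000], sumStrict [1..1000])
-- prints (500500,500500)
```

Both definitions denote the same function; the choice between them is driven entirely by the underlying evaluation model.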
> > This is still a *lot* better than not having the high-level
> > view at all and taking much longer to get the first working
> > version of my program (even if it is already more efficient
> > at that point). I don't know about you, but I'd rather have
> > 90% of my program in a nice declarative style and rewrite
> > 10% later to make it efficient than have a mess
> > throughout 100% of the program.
>
> A problem, as I see it, is that, although compilers like GHC do an admirable
> job of optimizing away the overhead of laziness, etc. in many circumstances,
> you can't be sure. You can use profiling tools and such after writing
> your code, but sometimes the changes that need to be made can be far-reaching,
> and you'll wish you had short-circuited this cycle in the first place.
I agree. At some point, a developer with a good intuition of
where the hot spots of the program will probably be, and with a
forward-thinking design, will always win. But, I think, this
is true for every language.
> A lot of people who program extensively in Haskell have a good
> understanding of the underlying abstract machine, and can anticipate the
> changes that need to be made, or at least ensure that they are
> localized. For example, they know "tricks" like ways to increase laziness;
> when I see messages like Florian's, I am reminded that such tricks can be
> relatively obscure.
>
> In short, the problem is that in a language such as Haskell without a
> high-level abstract operational semantics, you don't see
> all the benefits of a declarative style, since, as far as
> efficiency goes, we have only replaced the conventional
> programmer's mental model of a traditional machine
> architecture with a more esoteric model which is some
> fuzzy idea of a lazy abstract machine. The abstract
> machine may be simpler, but you still can't reason about
> it in a formal way if it's not a part of the standard. The
> best you can do is go off and read some papers, written by
> researchers for researchers, which describe a lazy
> machine, and take a guess that your particular
> implementation uses much the same thing. And hardly anyone
> will bother to go that far.
It is definitely true that this is a neglected area, and that,
on a much more basic level, not much effort has been spent on
producing documentation that makes it easy for a casual
Haskell programmer to gain an intuition of the various
methods for improving efficiency.
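To illustrate the kind of "trick" Frank mentions, here is one way to increase laziness (a hypothetical example of my own, not from Florian's message): making a pattern match irrefutable so that a pair-producing function becomes productive on infinite input.

```haskell
-- Strict pattern match: the case expression forces the
-- recursive call all the way down the list, so this version
-- diverges when given an infinite list.
unzipStrict :: [(a, b)] -> ([a], [b])
unzipStrict []              = ([], [])
unzipStrict ((x, y) : rest) =
  case unzipStrict rest of
    (xs, ys) -> (x : xs, y : ys)

-- The trick: an irrefutable (~) pattern defers the recursive
-- call until xs or ys is actually demanded, so the result can
-- be consumed incrementally, even from an infinite list.
unzipLazy :: [(a, b)] -> ([a], [b])
unzipLazy []              = ([], [])
unzipLazy ((x, y) : rest) =
  let ~(xs, ys) = unzipLazy rest
  in (x : xs, y : ys)

main :: IO ()
main = print (take 3 (fst (unzipLazy [(n, n) | n <- [1 :: Int ..]])))
-- prints [1,2,3]
```

The two definitions agree on all finite inputs; only the lazy one terminates here, which is exactly the kind of operational subtlety that is hard to learn without good documentation.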
Cheers,
Manuel