[Haskell-cafe] Re: Problems with strictness analysis?

2008-11-06 Thread Achim Schneider
wren ng thornton <[EMAIL PROTECTED]> wrote:

> If Haskell had strictness annotations as part of the type system,
> then there might be room for progress. We could imagine constructing
> separate polymorphic bodies for isum, one for each strictness variant
> of (+). Then, when isum is instantiated at types which define strict
> (+) we could use an optimized version of isum that forces evaluation
> at each step. Now the trick becomes how to control the explosion of
> generated code for all the combinatorially many strictness variants
> of types. For whole-program optimizing compilers, it should be
> relatively easy to keep the code bloat down, but for separate
> compilation it's not so straightforward. Chances are that any
> solution along this route would still require strictness annotations
> in order to do the right thing, only now we've lifted the annotations
> into the type language instead of leaving them in the term language.
>
Code explosion is actually what we want here: things like (+) are best
unboxed and inlined down to one of the many machine instructions the
target architecture offers for that operation (think x86, amd64, x87,
MMX, SSE, SSE2, ...).
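
For what it's worth, here is a sketch of how you can already opt in to
that per-type code generation with GHC's SPECIALIZE pragma, using the
isum from this thread (the pragma placement is my own illustration, not
something from the original post):

  module ISum where

  -- Generic body: one copy, dictionary-passing, (+) is an unknown call.
  isum :: (Eq a, Num a) => a -> a -> a
  isum 0 s = s
  isum n s = isum (n - 1) (s + n)

  -- Ask GHC for extra monomorphic copies, in which (+) can be unboxed
  -- and inlined down to a single machine instruction.
  {-# SPECIALIZE isum :: Int -> Int -> Int #-}
  {-# SPECIALIZE isum :: Double -> Double -> Double #-}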

As a side note, you made me think about y-combinators, stream fusion
and lists of computations that can be rewritten to use the minimum
number of y's possible/computable. It appears to have a lot in common
with strictness/eagerness if you e.g. think of [n..] as a fixpoint of
the form y (\num_calls -> num_calls + n). FP semanticists will now cry
"Heresy! Repeat after me: Pure is pure and impure is impure, always
and ever", so I'd rather shut up and think before I make even less sense.


-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Problems with strictness analysis?

2008-11-06 Thread wren ng thornton

Dominic Steinitz wrote:

wren ng thornton  freegeek.org> writes:

[snick]

> > isum 0 s = s
> > isum n s = isum (n-1) (s+n)
> This is tail recursive, and will be optimized to an iterative loop; 


[snick]

> In terms of having a compiler 'smart enough', it's not clear that 
> functions of this sort ought to be inferred strict simply because the 
> accumulator is ultimately returned to the caller. Consider for example:


I thought this was strict as Luke Palmer has already pointed out. My 
understanding is that a compiler may be able to infer it is strict and then 
perform eager evaluation.


But how should a compiler infer that it's strict? That's the trick. It's 
easy enough to come up with examples that would foil any naive attempt 
at inference ---like the (++) and (.) examples--- and it's not entirely 
clear that a general solution is possible. GHC's strictness analyzer 
will capture many things, but it's conservative enough to ensure that it 
doesn't change the semantics of the program. Strictness annotations are 
about declaring the semantics of the program, so any optimizations done 
by the compiler must ensure that they don't change the strictness 
properties of the original program.[1]


Even for this particular example, it's not clear that making isum strict 
in the accumulator is correct. The issue here is that isum does not have 
a type annotation and so it's polymorphic in (+). For the Int and 
Integer instances it happens that (+) is strict, but we could also have 
a Num instance for Peano integers or some other lazy representation, and 
this function should work on them as well--- preserving the 
least-strictness of the (+) operator for those types. For strict (+) 
it's true that isum n _|_ == _|_, but for any (+) that's lazy in its 
first argument that's not true. S(S(S _|_)) is a perfectly valid Peano 
integer and forbidding it as a return value from this function would 
alter the semantics from what was written in the definition.
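
To make that concrete, here is a minimal sketch (the Nat type and its
instances are mine, not from the original post) in which isum applied
to a bottom accumulator still returns a partial but perfectly valid
value:

  data Nat = Z | S Nat deriving Show

  toInt :: Nat -> Int
  toInt Z     = 0
  toInt (S n) = 1 + toInt n

  instance Eq Nat where
    m == n = toInt m == toInt n     -- only needed for the literal patterns

  instance Num Nat where
    m + Z   = m
    m + S n = S (m + n)             -- recurse on the second argument,
                                    -- so (+) is lazy in its first
    m   - Z   = m
    Z   - S _ = Z
    S m - S n = m - n
    fromInteger 0 = Z
    fromInteger k = S (fromInteger (k - 1))
    (*)    = error "not needed"
    abs    = id
    signum = error "not needed"

  isum :: (Eq a, Num a) => a -> a -> a
  isum 0 s = s
  isum n s = isum (n - 1) (s + n)

  -- isum (3 :: Nat) undefined reduces to S (S (S (S (S (S _|_)))));
  -- e.g.  case isum 3 (undefined :: Nat) of S _ -> "defined at the top"
  -- succeeds, so forcing the accumulator would change the semantics.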


If Haskell had strictness annotations as part of the type system, then 
there might be room for progress. We could imagine constructing separate 
polymorphic bodies for isum, one for each strictness variant of (+). 
Then, when isum is instantiated at types which define strict (+) we 
could use an optimized version of isum that forces evaluation at each 
step. Now the trick becomes how to control the explosion of generated 
code for all the combinatorially many strictness variants of types. For 
whole-program optimizing compilers, it should be relatively easy to keep 
the code bloat down, but for separate compilation it's not so 
straightforward. Chances are that any solution along this route would 
still require strictness annotations in order to do the right thing, 
only now we've lifted the annotations into the type language instead of 
leaving them in the term language.
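
As a rough illustration of what those separate bodies could amount to
if written by hand today (the names and the BangPatterns formulation
are mine; Haskell has no strictness-indexed types):

  {-# LANGUAGE BangPatterns #-}

  -- body for instantiations whose (+) is strict: force the accumulator
  isumStrict :: (Eq a, Num a) => a -> a -> a
  isumStrict 0 !s = s
  isumStrict n !s = isumStrict (n - 1) (s + n)

  -- body for instantiations whose (+) may be lazy: leave it alone
  isumLazy :: (Eq a, Num a) => a -> a -> a
  isumLazy 0 s = s
  isumLazy n s = isumLazy (n - 1) (s + n)

The type-level machinery would be about picking the first body
automatically wherever that is safe, without the programmer writing
the bang.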




>   > f 0 xs = xs
>   > f n xs = f (n-1) (replicate n n ++ xs)
>
> Since (++) can indeed return partial answers, it's fine for the 
> accumulator to be lazy. Indeed, making it strict harms performance 
> significantly. Another example is when the accumulator is a function, as 

Can this function be strict if (++) isn't? And if it isn't strict, why would it
make sense to evaluate it eagerly?


Depends what you mean by "strict". Adding ($!) before the accumulator 
will force each (++) thunk to WHNF, which will in turn force the 
replicate thunk to WHNF. Additional annotations could make the entire 
spine strict, or all the elements strict as well. Everything *can* be
made strict; whether it ought to be is a separate matter entirely.


The point was that a simple heuristic like detecting accumulator 
patterns and making the accumulators strict is not always a good idea. 
Adding that ($!) to the function will double the evaluation time and 
makes no semantic difference. It *doesn't* make sense to evaluate the 
accumulator eagerly, because you'll still have a bunch of thunks, 
they're just pushed back one element in the list. The only way for 
strictness to alter the semantics of this particular function is if the 
entire spine of the second argument is forced (which in turn makes it 
take time quadratic in the length of the return value, not to mention 
the extra space for the whole list).
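
For concreteness, a sketch of the variants being compared (the names
fWhnf and fSpine and the helper seqSpine are mine, not from the post):

  f :: Int -> [Int] -> [Int]
  f 0 xs = xs
  f n xs = f (n - 1) (replicate n n ++ xs)

  -- ($!) forces only the outermost (:) cell of the accumulator; the
  -- spine and elements behind it remain thunks, so the semantics are
  -- unchanged and the extra evaluation is pure overhead.
  fWhnf :: Int -> [Int] -> [Int]
  fWhnf 0 xs = xs
  fWhnf n xs = fWhnf (n - 1) $! (replicate n n ++ xs)

  -- Forcing the whole spine each step is the only way strictness
  -- changes the semantics here (fSpine n undefined is now undefined),
  -- and it costs time quadratic in the length of the result.
  seqSpine :: [a] -> b -> b
  seqSpine []       b = b
  seqSpine (_ : xs) b = seqSpine xs b

  fSpine :: Int -> [Int] -> [Int]
  fSpine 0 xs = xs
  fSpine n xs = let acc = replicate n n ++ xs
                in  acc `seqSpine` fSpine (n - 1) acc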



PS This subject seems to come up often enough to be worth a wiki entry (maybe
there already is one). I think we should also be careful with terminology (as
Luke Palmer and David Menendez have pointed out). Maybe that could be included
in the wiki entry.


It would be worth a wiki entry. In addition to the strict/non-strict vs
eager/lazy distinction, it's also important to distinguish unlifted
types from unboxed types (which both come up in this sort of discussion).




[1] So for example, optimizations like those in 
 
are not the sorts of things which it would be correct for a compiler to
perform.

Re: [Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread wren ng thornton

Luke Palmer wrote:

The example being discussed in this thread is a good one:

  sum [1..10^8] + length [1..10^8]

With Haskell's semantics, we know we can write this as:

  let xs = [1..10^8] in sum xs + length xs

But it causes a change in memory *complexity*, so in some sense these
two sentences are not equal.  What is the theory in which we can
observe this fact?  Is it possible to give a simple denotational
semantics which captures it?


There's actually been a good deal of work on trying to mathematize this 
sort of issue, under the name of linear logic[1]. The problem with 
classical equational reasoning is that it doesn't capture the notion of 
"resources" or the "sharing" thereof. The two expressions are not the 
same because the first has two [1..10^8] resources it can use up,
whereas the latter only has one. Depending on the balance between
sharing temporal work and minimizing memory overhead, sometimes it's
okay to add sharing and other times it's okay to remove it, but in
general both options are not available at once.


It's pretty easy to capture issues of economy with LL, though making a 
rewriting system takes a bit more framework. I'm not sure to what extent 
LL has been explored as a semantic model for programs, but Clean[2] and 
Phil Wadler[3] have certainly done very similar work.



[1] 
[2] 
[3] 

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Claus Reinke

  equational /= denotational


Nonetheless, Haskell has equational semantics which are derived from
its denotational ones.  But when I said "equational semantics" I
really meant something more like "equations" :-).


Unlike Standard ML, for instance, Haskell does not have standard
semantics - a persistent embarrassment. There are semantics for fragments,
but no guarantee that any of them will match any part of the language, let 
alone its implementations.


One can base equational semantics on denotational semantics, but that is 
not the only way, hence the two are not equal. I was trying to point out
the missing part, where equations are useful for operational reasoning,
beyond simple denotations.



Sometimes, operational semantics based
on directed equations give you the best of both worlds: equational reasoning
and behavioural guidelines, both at a suitably "clean" and abstract level.


By directed equations you mean unidirectional rewrite rules?


Yes. Think of rewriting (replacing old with new) as a general form
of operational semantics. Within this form, there is a wide range of
possibilities, including rewriting abstract machine states and rewriting
source-level programs. Somewhere near the latter, there is a "most
abstract" form of operational semantics for a language, one for which
every other form adds details that are not inherent in the language, but
are artifacts of the formalisation (details of the abstract machine, or
details of the mathematical domain, or ..).


But I'd like to back up a bit.  I may have been too quick to assign
the judgement "dirty" to operational thinking.  I am not asking this
question as a complaint, as a challenge on the language, or as any
other such rhetorical device: my question is earnest.  I would like to
know or to develop a way to allow abstract analysis of time and space
complexity.


So you want to reason about the way from the original program to its
outputs, not just about the outputs. You can cling to denotations, by
extending mere values with information about the computation leading
to those values, but why not reason about the computation directly?

In logic terms, you want to talk about the proof, not just about the
truth of your theorems. In functional language terms, you want to talk
about reductions, not just about normal forms. This works well for
pure lambda calculus, and has been extended to cover other aspects
of Haskell implementations, such as the call-by-need lambda calculus as
a way of evaluating non-strict programs with sharing.

   The call-by-need lambda calculus
   John Maraist and Martin Odersky and Philip Wadler
   Journal of Functional Programming, 1998
   http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.44.5259

The idea is to use "let" to represent sharing, and to refine the reduction
rules to take this into account: instead of substituting parameters into
function bodies, parameter bindings are kept in "let"-bindings, where
their evaluation is shared (only values can be substituted, so substitution
is preceded by evaluation, when needed).
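
A tiny worked example may help here (the term and the trace are my own
illustration of that style of rule, not taken from the paper):

  example :: Int
  example = (\x -> x + x) (1 + 2)

  -- Call-by-need binds the argument instead of substituting the
  -- unevaluated expression, so the work of (1 + 2) is done at most once:
  --
  --   (\x -> x + x) (1 + 2)
  --   ==>  let x = 1 + 2 in x + x    -- beta: bind, don't substitute
  --   ==>  let x = 3 in x + x        -- demand on x evaluates the binding
  --   ==>  let x = 3 in 3 + 3        -- only the value is substituted
  --   ==>  6                         -- both uses of x shared the work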

The resulting reductions are quite a bit more involved than without
sharing, because all reductions take place within a context (those
"let"-bindings). But that is quite close to what happens in a compiled
graph reduction implementation: those "let"-bindings represent the
heap, the nested contexts correspond to stack.

(there are of course optimizations and representation changes that affect 
performance, but the former are themselves expressed as rewrite rules,
and the latter can be accounted for by refining the rewrite system, when 
such details are needed - luckily, that isn't often the case)



In my time working with Haskell, I've developed a pretty accurate
"intuition" about the performance of programs.  It is some meld of
thinking in terms of the lazy evaluation mechanism, some higher-level
rules of thumb about when new thunks are allocated, and probably some
other stuff hiding in the depths of my brain.  I know it, but I am not
satisfied with it, because I can't formalize it.  I wouldn't be able
to write them down and explain to a fellow mathematician how I reason
about Haskell programs.

The example being discussed in this thread is a good one:

 sum [1..10^8] + length [1..10^8]

With Haskell's semantics, we know we can write this as:

 let xs = [1..10^8] in sum xs + length xs

But it causes a change in memory *complexity*, so in some sense these
two sentences are not equal.  What is the theory in which we can
observe this fact?  Is it possible to give a simple denotational
semantics which captures it?


Why do you insist on denotational semantics for reasoning about evaluation?
Denotational semantics are best when all you care about are values.

Have a look at that paper, or the earlier version

   http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.2287

There is also

   John Launchbury, A Natural Semantics for Lazy Evaluation, 1993
   http://citeseerx.ist.psu.e

Re[2]: [Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Bulat Ziganshin
Hello Luke,

Thursday, November 6, 2008, 2:34:36 AM, you wrote:

> The example being discussed in this thread is a good one:

>   sum [1..10^8] + length [1..10^8]

> With Haskell's semantics, we know we can write this as:

>   let xs = [1..10^8] in sum xs + length xs

> But it causes a change in memory *complexity*, so in some sense these
> two sentences are not equal.  What is the theory in which we can
> observe this fact?  Is it possible to give a simple denotational
> semantics which captures it?

I'm not a mathematician, but why can't you explore term rewriting
systems? Such a system has some graph at the initial stage and modifies
it until a normal form is reached. The graphs representing the first and
second expressions are different (the first is a tree, while the second
has two links to the [1..10^8] node), and this leads to the difference
between their evaluation processes. At every step of rewriting the first
graph, its size remains O(1), while the second graph grows up to O(n)
in size during rewriting.
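
As a sketch of the two graphs being contrasted (the Int annotations are
mine, just to make it compile):

  separate :: Int
  separate = sum [1 .. 10 ^ 8 :: Int] + length [1 .. 10 ^ 8 :: Int]
  -- two independent list nodes: each list is produced and consumed in
  -- lockstep, so the live part of the graph stays O(1)

  shared :: Int
  shared = let xs = [1 .. 10 ^ 8 :: Int] in sum xs + length xs
  -- one shared node: the pending 'length xs' keeps the head of xs
  -- reachable while sum is still traversing it, so the live graph
  -- grows to O(n) before it can shrink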


-- 
Best regards,
 Bulat      mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Luke Palmer
On Wed, Nov 5, 2008 at 1:24 AM, Claus Reinke <[EMAIL PROTECTED]> wrote:
>> How can we support analysis of time and space complexity without
>> expanding ourselves into the complicated dirty world of operational
>> thinking?
>
>   equational /= denotational

Nonetheless, Haskell has equational semantics which are derived from
its denotational ones.  But when I said "equational semantics" I
really meant something more like "equations" :-).

>   operational /= bad
>
> Sometimes, denotational is too abstract, offering no guidelines on
> behaviour, just as abstract machine based operational thinking is too
> concrete, hiding
> the insights in a flood of details. Sometimes, operational semantics based
> on directed equations give you the best of both worlds: equational reasoning
> and behavioural guidelines, both at a suitably "clean" and abstract level.

By directed equations you mean unidirectional rewrite rules?

But I'd like to back up a bit.  I may have been too quick to assign
the judgement "dirty" to operational thinking.  I am not asking this
question as a complaint, as a challenge on the language, or as any
other such rhetorical device: my question is earnest.  I would like to
know or to develop a way to allow abstract analysis of time and space
complexity.

In my time working with Haskell, I've developed a pretty accurate
"intuition" about the performance of programs.  It is some meld of
thinking in terms of the lazy evaluation mechanism, some higher-level
rules of thumb about when new thunks are allocated, and probably some
other stuff hiding in the depths of my brain.  I know it, but I am not
satisfied with it, because I can't formalize it.  I wouldn't be able
to write them down and explain to a fellow mathematician how I reason
about Haskell programs.

The example being discussed in this thread is a good one:

  sum [1..10^8] + length [1..10^8]

With Haskell's semantics, we know we can write this as:

  let xs = [1..10^8] in sum xs + length xs

But it causes a change in memory *complexity*, so in some sense these
two sentences are not equal.  What is the theory in which we can
observe this fact?  Is it possible to give a simple denotational
semantics which captures it?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Achim Schneider
Henning Thielemann <[EMAIL PROTECTED]> wrote:

> Achim Schneider schrieb:
> > Henning Thielemann <[EMAIL PROTECTED]> wrote:
> >>
> >> There was
> >>   http://www.haskell.org/haskellwiki/Things_to_avoid
> >>
> >> which has been renamed to the more friendly
> >>   "Haskell programming tips"
> >>
> > I was rather thinking of a list of performance pitfalls and laziness
> > tarpits, starting with the all-famous
> > 
> > avg xs = sum xs + length xs
> 
> 
> (/) instead of (+) ?
> 
Only for sufficiently correct definitions of avg.


-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Achim Schneider
"Luke Palmer" <[EMAIL PROTECTED]> wrote:

> On Wed, Nov 5, 2008 at 4:33 AM, Achim Schneider <[EMAIL PROTECTED]>
> wrote:
> > I know that I, coming from a C/Scheme POV, in the beginning
> > attributed much magic to Haskell's inner workings and assumed,
> > because of the general wizardly air of the whole language, avg to
> > run in O(n) time and constant resp. O(n) space.
> 
> Haskell's great strength is its equational semantics.  I would like
> Haskell programmers to think equationally, mathematically, rather than
> operationally, when writing Haskell programs.  If I were to teach a
> course in Haskell, I would like to launch off of denotational
> semantics, hopefully without ever having to say "lazy evaluation".
> (It's a pipe dream, of course, but you get the idea)
> 
> However, this strength is also a weakness when we want to analyze the
> efficiency of programs.  The denotational semantics of Haskell say
> nothing about time or space complexity, and give us no tools to reason
> about it.  A Haskell interpreter which randomly rewrites terms until
> it happens upon a normal form is considered correct (or rather,
> "correct with probability 1" :-).
> 
> How can we support analysis of time and space complexity without
> expanding ourselves into the complicated dirty world of operational
> thinking?
> 
You can't cleanse a student's mind of "bad, dirty operational thoughts"
by not talking about them, any more than you can exterminate the human
race by discontinuing sex education. Fumbling the keyboard and making
things go "blink" and "swoosh" is just too much fun.

A natural understanding of what's "clean, elegant and fun" develops
over time, and students have to rub their noses in gory and dirty
details and code until it bleeds before they see what all that
abstract nonsense is good for: not so much to formalise thinking, but
to enable one to speak axiomatically, just like one thinks.

-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Henning Thielemann
Achim Schneider schrieb:
> Henning Thielemann <[EMAIL PROTECTED]> wrote:
>>
>> There was
>>   http://www.haskell.org/haskellwiki/Things_to_avoid
>>
>> which has been renamed to the more friendly
>>   "Haskell programming tips"
>>
> I was rather thinking of a list of performance pitfalls and laziness
> tarpits, starting with the all-famous
> 
> avg xs = sum xs + length xs


(/) instead of (+) ?

In the old hawiki there was an article about that particular example ...
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Claus Reinke

Haskell's great strength is its equational semantics.  I would like
Haskell programmers to think equationally, mathematically, rather than
operationally, when writing Haskell programs.  If I were to teach a
course in Haskell, I would like to launch off of denotational
semantics, hopefully without ever having to say "lazy evaluation".
(It's a pipe dream, of course, but you get the idea)

However, this strength is also a weakness when we want to analyze the
efficiency of programs.  The denotational semantics of Haskell say
nothing about time or space complexity, and give us no tools to reason
about it.  A Haskell interpreter which randomly rewrites terms until
it happens upon a normal form is considered correct (or rather,
"correct with probability 1" :-).

How can we support analysis of time and space complexity without
expanding ourselves into the complicated dirty world of operational
thinking?


   equational /= denotational
   operational /= bad

Sometimes, denotational is too abstract, offering no guidelines on behaviour, 
just as abstract machine based operational thinking is too concrete, hiding
the insights in a flood of details. Sometimes, operational semantics based 
on directed equations give you the best of both worlds: equational reasoning 
and behavioural guidelines, both at a suitably "clean" and abstract level.


Claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Luke Palmer
On Wed, Nov 5, 2008 at 4:33 AM, Achim Schneider <[EMAIL PROTECTED]> wrote:
> I know that I, coming from a C/Scheme POV, in the beginning attributed
> much magic to Haskell's inner workings and assumed, because of the
> general wizardly air of the whole language, avg to run in O(n) time and
> constant resp. O(n) space.

Haskell's great strength is its equational semantics.  I would like
Haskell programmers to think equationally, mathematically, rather than
operationally, when writing Haskell programs.  If I were to teach a
course in Haskell, I would like to launch off of denotational
semantics, hopefully without ever having to say "lazy evaluation".
(It's a pipe dream, of course, but you get the idea)

However, this strength is also a weakness when we want to analyze the
efficiency of programs.  The denotational semantics of Haskell say
nothing about time or space complexity, and give us no tools to reason
about it.  A Haskell interpreter which randomly rewrites terms until
it happens upon a normal form is considered correct (or rather,
"correct with probability 1" :-).

How can we support analysis of time and space complexity without
expanding ourselves into the complicated dirty world of operational
thinking?

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Achim Schneider
Henning Thielemann <[EMAIL PROTECTED]> wrote:

> Achim Schneider schrieb:
> > Derek Elkins <[EMAIL PROTECTED]> wrote:
> > 
> >> well-known anti-patterns
> >>
> > I'm wondering why they are so miraculously well-known. Could it be
> > the dedicated wiki page titled "DONTDOTHAT!!!one!" that lists them
> > all?
> 
> 
> There was
>   http://www.haskell.org/haskellwiki/Things_to_avoid
> 
> which has been renamed to the more friendly
>   "Haskell programming tips"
>
I was rather thinking of a list of performance pitfalls and laziness
tarpits, starting with the all-famous

avg xs = sum xs + length xs

The above link seems to consist purely of advice about style and how to
avoid imperative thinking, and does not do much to take away the fear of
FP and laziness that I commonly notice in e.g. C++ programmers: seeing
Haskell, they just don't know what the heck is going on. A list of such
things as avg above and stuff like

lastInt = last [1..]

and explanations of why and how they work (and maybe still don't work)
would surely be helpful for people who don't intend, or never even
consider, reading into SPJ's GHC papers.

I know that I, coming from a C/Scheme POV, in the beginning attributed
much magic to Haskell's inner workings and assumed, because of the
general wizardly air of the whole language, that avg would run in O(n)
time and constant resp. O(n) space.
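
On the wiki-entry angle, the usual fix would be worth showing right next
to the pitfall; a sketch (the strict one-pass fold is a standard idiom,
not something taken from this thread):

  {-# LANGUAGE BangPatterns #-}
  import Data.List (foldl')

  -- naive version: two traversals, and the shared xs is retained in
  -- full while the first traversal runs
  avgNaive :: [Double] -> Double
  avgNaive xs = sum xs / fromIntegral (length xs)

  -- one strict pass: sum and length travel together, so the list can
  -- be consumed and garbage-collected as it is produced
  avg :: [Double] -> Double
  avg xs = s / fromIntegral (n :: Int)
    where
      (s, n) = foldl' step (0, 0) xs
      step (!acc, !len) x = (acc + x, len + 1)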

-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Henning Thielemann
Achim Schneider schrieb:
> Derek Elkins <[EMAIL PROTECTED]> wrote:
> 
>> well-known anti-patterns
>>
> I'm wondering why they are so miraculously well-known. Could it be the
> dedicated wiki page titled "DONTDOTHAT!!!one!" that lists them
> all?


There was
  http://www.haskell.org/haskellwiki/Things_to_avoid

which has been renamed to the more friendly
  "Haskell programming tips"
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Problems with strictness analysis?

2008-11-05 Thread Achim Schneider
Derek Elkins <[EMAIL PROTECTED]> wrote:

> well-known anti-patterns
>
I'm wondering why they are so miraculously well-known. Could it be the
dedicated wiki page titled "DONTDOTHAT!!!one!" that lists them
all?

-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Problems with strictness analysis?

2008-11-04 Thread Dominic Steinitz
wren ng thornton  freegeek.org> writes:

[snick]

> > isum 0 s = s
> > isum n s = isum (n-1) (s+n)
> 
> This is tail recursive, and will be optimized to an iterative loop; 

[snick]

> 
> In terms of having a compiler 'smart enough', it's not clear that 
> functions of this sort ought to be inferred strict simply because the 
> accumulator is ultimately returned to the caller. Consider for example:

I thought this was strict as Luke Palmer has already pointed out. My 
understanding is that a compiler may be able to infer it is strict and then 
perform eager evaluation.

> 
>   > f 0 xs = xs
>   > f n xs = f (n-1) (replicate n n ++ xs)
> 
> Since (++) can indeed return partial answers, it's fine for the 
> accumulator to be lazy. Indeed, making it strict harms performance 
> significantly. Another example is when the accumulator is a function, as 

Can this function be strict if (++) isn't? And if it isn't strict, why would it
make sense to evaluate it eagerly?

Dominic.

PS This subject seems to come up often enough to be worth a wiki entry (maybe
there already is one). I think we should also be careful with terminology (as
Luke Palmer and David Menendez have pointed out). Maybe that could be included
in the wiki entry.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe