Mike Jones <[EMAIL PROTECTED]> writes on (++) efficiency:

> how do I approximate its efficiency compared to adding one
> element at a time?

In a "regular" situation,   xs ++ ys
needs a number of "steps" proportional to  (length xs).
This follows from the simplest Haskell program you could suggest
for (++) - guess, what is this program?
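For the record, here is a minimal sketch of that simplest definition, renamed `append` here only so it does not clash with the Prelude's (++):

```haskell
-- A sketch of the usual definition of (++), renamed to avoid the
-- Prelude clash.  It recurses down the first list only, rebuilding
-- its spine with fresh (:) cells, so the cost is (length xs) "steps";
-- the second list is shared, not copied.
append :: [a] -> [a] -> [a]
append []     ys = ys
append (x:xs) ys = x : append xs ys
```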

> Does 1:2:3:4:5:6:7:8 execute faster than [1, 2, 3, 4] ++ [5, 6, 7, 8], 
> where the first case is executed recursively in a function?

These are literal expressions, which the compiler is likely to
evaluate at compile time; both would then cost zero at run time.

But if at run time  ys = [5,6,7,8]  is already "ready", then
                                                       [1,2,3,4]++ys
costs 4 "steps",  while building  1:2:3:4:5:6:7:8:[]  costs 8 steps.
Lazy evaluation makes the truth of these considerations depend
highly on the concrete situation in the program.

Also, if some implementer bases Haskell on a different internal
model for lists (God save us!), then (++) could be made to cost
1 step regularly ...
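One such model does exist: difference lists, which represent a list by the function that prepends it. This sketch is my illustration, not part of the original post:

```haskell
-- Difference lists: a list represented as "the function that
-- prepends it".  Appending two of them is just function
-- composition -- one step, independent of the lists' lengths.
type DList a = [a] -> [a]

fromList :: [a] -> DList a
fromList xs = (xs ++)

toList :: DList a -> [a]
toList d = d []

appendD :: DList a -> DList a -> DList a
appendD = (.)
```

The linear work is not gone, of course; it is postponed until toList walks the assembled result once.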

Generally, beware of (++): in some situations, such as repeatedly
appending to the end of a growing list, its cost adds up badly.
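The classic trap (my illustration, not the poster's): left-nested appends retraverse the growing prefix on every step, while right-nested appends stay linear.

```haskell
-- Both build the same list, but at different costs:
-- foldl nests as (((a ++ b) ++ c) ++ d), copying the accumulated
-- prefix again on each append -- quadratic in total elements.
-- foldr nests as (a ++ (b ++ (c ++ d))) -- linear overall.
slowConcat :: [[a]] -> [a]
slowConcat = foldl (++) []

fastConcat :: [[a]] -> [a]
fastConcat = foldr (++) []
```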

------------------
Sergey Mechveliani
[EMAIL PROTECTED]



