andrewcoppin:
> Brandon S. Allbery KF8NH wrote:
> >
> >On 2008 May 13, at 17:01, Andrew Coppin wrote:
> >
> >>That definition of mean is wrong because it traverses the list twice.
> >>(Curiosity: would traversing it twice in parallel work any better?)
> >>As for the folds - I always *always* mix up [...]
Brandon S. Allbery KF8NH wrote:
On 2008 May 13, at 17:01, Andrew Coppin wrote:
That definition of mean is wrong because it traverses the list twice.
(Curiosity: would traversing it twice in parallel work any better?)
As for the folds - I always *always* mix up
It might work "better" but you're still wasting a core that c[...]
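For readers who, like Andrew, mix up the folds, a minimal reminder of how the two associate (standard Prelude behaviour; the function names below are mine, chosen for illustration):

```haskell
-- foldr f z [a,b,c] = a `f` (b `f` (c `f` z))   (nests to the right)
-- foldl f z [a,b,c] = ((z `f` a) `f` b) `f` c   (nests to the left)
-- A non-commutative operator makes the difference visible:
rightFold, leftFold :: [Integer] -> Integer
rightFold = foldr (-) 0   -- 1 - (2 - (3 - 0))
leftFold  = foldl (-) 0   -- ((0 - 1) - 2) - 3
```

rightFold [1,2,3] gives 2, leftFold [1,2,3] gives -6.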
Don Stewart wrote:
Andrew, would you say you understand the original problem of why
mean xs = sum xs / fromIntegral (length xs)
was a bad idea now? Or why the left folds were a better solution?
That definition of mean is wrong because it traverses the list twice.
(Curiosity: would traversing it twice in parallel work any better?) [...]
Hello Don,
Wednesday, May 14, 2008, 12:34:07 AM, you wrote:
>> high-level understanding of what's going on. The STG paper isn't a good
>> way to get that high-level overview.
> Andrew, would you say you understand the original problem of why
> mean xs = sum xs / fromIntegral (length xs)
> was a bad idea now? Or why the left folds were a better solution? [...]
On Tue, May 13, 2008 at 4:30 PM, Andrew Coppin
<[EMAIL PROTECTED]> wrote:
> Well, I'm the sort of contrary person who reads random papers like that
> just for the fun of it. But when somebody says something like this, I don't
> think "ooo, that's scary", I think "ooo, somebody really ought to sit [...]
andrewcoppin:
> Darrin Thompson wrote:
> >These "tricks" going into Real World Haskell?
>
> Seconded.
>
> >When you say someone
> >needs to get familiar with the "STG paper" it scares me (a beginner)
> >off a little, and I've been making an effort to approach the papers.
>
> Well, I'm the sort of contrary person who reads random papers like that just for the fun of it. [...]
Darrin Thompson wrote:
These "tricks" going into Real World Haskell?
Seconded.
When you say someone
needs to get familiar with the "STG paper" it scares me (a beginner)
off a little, and I've been making an effort to approach the papers.
Well, I'm the sort of contrary person who reads random papers like that just for the fun of it. [...]
dons:
> jeff.polakow:
> >Hello,
> >
> >> For example, the natural and naive way to write Andrew's "mean" function
> >> doesn't involve tuples at all: simply tail recurse with two accumulator
> >> parameters, and compute the mean at the end. GHC's strictness analyser
> >> does the right thing with this, so there's no need for seq, $!, or [...]
Jeff Polakow wrote:
Then, I immediately blow my stack if I try something like:
mean [1..10].
The culprit is actually sum which is defined in the base libraries as
either a foldl or a direct recursion depending on a compiler flag. In
either case, the code is not strict enough; just [...]
jeff.polakow:
>Hello,
>
>> For example, the natural and naive way to write Andrew's "mean" function
>> doesn't involve tuples at all: simply tail recurse with two accumulator
>> parameters, and compute the mean at the end. GHC's strictness analyser
>> does the right thing with this, so there's no need for seq, $!, or [...]
On Tuesday 13 May 2008, Jeff Polakow wrote:
> Is this the code you mean?
>
> meanNat = go 0 0 where
>     go s n []     = s / n
>     go s n (x:xs) = go (s+x) (n+1) xs
>
> If so, bang patterns are still required in ghc-6.8.2 to run
> in constant memory:
>
> meanNat = go 0 [...]
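A sketch of the bang-pattern version under discussion (my reconstruction, not the poster's exact code; assumes GHC's BangPatterns extension):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Strict accumulators: the bangs force s and n on every step, so no
-- chain of suspended (+) thunks can build up, and the loop runs in
-- constant space even without optimization.
meanNat :: [Double] -> Double
meanNat = go 0 0
  where
    go !s !n []     = s / n
    go !s !n (x:xs) = go (s + x) (n + 1) xs
```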
Hello,
> > $ pointfree "\xs -> foldl' (+) 0 xs / fromIntegral (length xs)"
> > ap ((/) . foldl' (+) 0) (fromIntegral . length)
>
This will have the same space usage as the pointed version. You can see
this by looking at the monad instance for ((->) r):
instance Monad ((->) r) where
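Spelling out the point being made: in the function monad (the instance shown in GHC's base, reproduced in the comment below), `ap f g = \x -> f x (g x)`, so both operands receive the same list and it is retained exactly as in the pointed version. `mean2` is the name from the quoted GHCi session:

```haskell
import Control.Monad (ap)
import Data.List (foldl')

-- instance Monad ((->) r) where
--     return  = const
--     f >>= k = \r -> k (f r) r
-- Hence ap f g = \xs -> f xs (g xs): the same xs feeds both
-- foldl' (+) 0 and length, so the whole list stays live.
mean2 :: [Double] -> Double
mean2 = ap ((/) . foldl' (+) 0) (fromIntegral . length)
```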
Hello,
> For example, the natural and naive way to write Andrew's "mean" function
> doesn't involve tuples at all: simply tail recurse with two accumulator
> parameters, and compute the mean at the end. GHC's strictness analyser
> does the right thing with this, so there's no need for seq, $!, or [...]
Darrin Thompson wrote:
> These "tricks" going into Real World Haskell?
Some will, yes.
For example, the natural and naive way to write Andrew's "mean" function
doesn't involve tuples at all: simply tail recurse with two accumulator
parameters, and compute the mean at the end. GHC's strictness analyser does the right thing with this [...]
On Tue, May 13, 2008 at 12:48 PM, Paul Johnson <[EMAIL PROTECTED]> wrote:
> > $ pointfree "\xs -> foldl' (+) 0 xs / fromIntegral (length xs)"
> > ap ((/) . foldl' (+) 0) (fromIntegral . length)
>
> But when I try this in GHCi 6.8.2 I get:
>
> > Prelude Data.List Control.Monad> let mean2 = ap ((/) . foldl' (+) 0) (fromIntegral . length) [...]
Jeff Polakow wrote:
[...] This can be easily fixed by defining a suitable strict sum:
sum' = foldl' (+) 0
and now sum' has constant space. We could try to redefine mean using
sum':
mean1 xs = sum' xs / fromIntegral (length xs)
but this still gobbles up memory. The reason is that xs is shared: while sum' consumes the list, the pending call to length still references it, so the whole of xs is retained [...]
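Jeff's point can be reproduced directly; a self-contained sketch of the quoted definitions:

```haskell
import Data.List (foldl')

-- sum' itself runs in constant space...
sum' :: Num a => [a] -> a
sum' = foldl' (+) 0

-- ...but mean1 still retains the whole list: while sum' consumes xs,
-- the pending call to length holds another reference to it, so no
-- cons cell can be garbage-collected until the first traversal ends.
mean1 :: [Double] -> Double
mean1 xs = sum' xs / fromIntegral (length xs)
```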
On Tue, May 13, 2008 at 2:20 AM, Don Stewart <[EMAIL PROTECTED]> wrote:
> Note the use of strict pairs. Key to ensuring the accumulators end up in
> registers. The performance difference here is due to foldl (and all left
> folds) not fusing in normal build/foldr fusion.
>
> The vector version [...]
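A sketch of the strict-pair style Don describes (the names here are illustrative, not his actual code): a pair type with strict fields means building the new accumulator forces both sums, and with optimization GHC can unbox the fields into registers.

```haskell
-- Both fields are strict, so constructing P (s+y) (n+1) forces the
-- new sums immediately; with -O GHC unboxes them into registers.
data P = P !Double !Int

meanP :: [Double] -> Double
meanP xs = s / fromIntegral n
  where
    P s n = go (P 0 0) xs
    go acc []           = acc
    go (P s' n') (y:ys) = go (P (s' + y) (n' + 1)) ys
```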
On 2008 May 12, at 22:18, Jeff Polakow wrote:
Then, I immediately blow my stack if I try something like:
mean [1..10].
The culprit is actually sum which is defined in the base libraries
as either a foldl or a direct recursion depending on a compiler
flag. In either case, the code is not strict enough [...]
I don't know why, but perhaps beginners may expect too much from the
laziness, almost to the level of magic (me too, in the beginning!). In an
eager language, a function like
mean :: (Fractional a) => [a] -> a
expects the *whole* list before it can calculate the mean, and the question
of the 'mea[...]
gale:
> Andrew Coppin wrote:
> > I offer up the following example:
> >
> > mean xs = sum xs / length xs
> >
> > Now try, say, "mean [1.. 1e9]", and watch GHC eat several GB of RAM. (!!)
> >
> > If we now rearrange this to
> >
> > mean = (\(s,n) -> s / n) . foldr (\x (s,n) -> let s' = s+x; n' = n+1 in s' [...]
On Mon, May 12, 2008 at 08:01:53PM +0100, Andrew Coppin wrote:
> I offer up the following example:
>
> mean xs = sum xs / length xs
>
> Now try, say, "mean [1.. 1e9]", and watch GHC eat several GB of RAM. (!!)
I don't see why the performance implications of this program are surprising.
Just ask a[...]
Advanced technology ought to look like unpredictable magic.
My experience with lazy evaluation is such that every time a program is
slower or bulkier than I presumed, it is not arbitrariness, it is
something new to learn.
My experience with GHC is such that every surprise it gives me is a
pl[...]
Hello,
> I offer up the following example:
>
This is an instructive example.
> mean xs = sum xs / length xs
>
In order to type-check, I actually need to write something like:
mean xs = sum xs / fromIntegral (length xs)
There are other ways of getting the numeric types to match correctly, but [...]
Andrew Coppin wrote:
> I offer up the following example:
>
> mean xs = sum xs / length xs
>
> Now try, say, "mean [1.. 1e9]", and watch GHC eat several GB of RAM. (!!)
>
> If we now rearrange this to
>
> mean = (\(s,n) -> s / n) . foldr (\x (s,n) -> let s' = s+x; n' = n+1 in s'
> `seq` n' `seq` [...]
On Mon, 2008-05-12 at 20:01 +0100, Andrew Coppin wrote:
> In short, as a fairly new Haskell programmer, I find it completely
> impossible to write code that doesn't crawl along at a snail's pace.
> Even when I manage to make it faster, I usually have no clue why. (E.g.,
> adding a seq to a merg[...]
Don Stewart wrote:
jeff.polakow:
Hello,
One frequent criticism of Haskell (and by extension GHC) is that it has
unpredictable performance and memory consumption. I personally do not find
this to be the case. I suspect that most programmer confusion is rooted in
shaky knowledge of lazy evaluation [...]
As a beginner, I had found the behaviour quite unpredictable. But with time
I found that I could reason out the behaviour with my slowly growing
knowledge of laziness. I don't spot all the places in my program that will
suck while writing a program, but post facto many things become clear. (And
the [...]
On Fri, May 09, 2008 at 02:24:12PM -0700, Don Stewart wrote:
> jeff.polakow:
> > Hello,
> >
> > One frequent criticism of Haskell (and by extension GHC) is that it has
> > unpredictable performance and memory consumption. I personally do not find
> > this to be the case. I suspect that most programmer confusion is rooted in [...]
jeff.polakow:
>Hello,
>
>One frequent criticism of Haskell (and by extension GHC) is that it has
>unpredictable performance and memory consumption. I personally do not find
>this to be the case. I suspect that most programmer confusion is rooted in
>shaky knowledge of lazy evaluation [...]
Hello,
One frequent criticism of Haskell (and by extension GHC) is that it has
unpredictable performance and memory consumption. I personally do not find
this to be the case. I suspect that most programmer confusion is rooted in
shaky knowledge of lazy evaluation; and I have been able to fix, w[...]