Hi
> Yes, though testing stackGobbler with a large enough data set could
> be problematic for the very reason we've been discussing.
Yes you are sure, or yes you tested and the results show that
neilGobbler is x% slower and consumes y% more memory on a specific test
n?
> But let's say your hypothe
Adrian Hey wrote:
AFAICT neilGobbler isn't even entirely safe as an implementation of
an eager take. There's nothing in the Haskell standard to stop it being
transformed into..
neilGobbler :: Int -> [x] -> [x]
neilGobbler n xs = length (take n xs) `seq` take n xs
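For reference, plausible definitions of the other two gobblers under discussion — a sketch reconstructed from the thread's descriptions, not Adrian Hey's actual code (which is in the linked haskell-cafe post):

```haskell
-- stackGobbler: eager take via ordinary (non-tail) recursion.
-- The pending (y:) contexts accumulate on the stack until the base case.
stackGobbler :: Int -> [x] -> [x]
stackGobbler 0 _      = []
stackGobbler _ []     = []
stackGobbler n (y:ys) = let ys' = stackGobbler (n-1) ys
                        in  ys' `seq` (y:ys')

-- heapGobbler: tail-recursive with an accumulator, so the partial
-- result is built on the heap and reversed at the end.
heapGobbler :: Int -> [x] -> [x]
heapGobbler = go []
  where go acc 0 _      = reverse acc
        go acc _ []     = reverse acc
        go acc n (y:ys) = go (y:acc) (n-1) ys
```

The contrast the thread draws: stackGobbler builds the result directly but needs stack proportional to n, while heapGobbler stays in constant stack at the cost of an extra heap-allocated accumulator plus a reverse.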
Whoops, I see stackGobbler has th
Neil Mitchell wrote:
Hi
But the point is that *both* heapGobbler and neilGobbler are likely to
be slower and chew through at least twice as much heap as stackGobbler,
which would be the implementation of choice for both simplicity and
performance if it wasn't for this stack management problem.
Hi
> But the point is that *both* heapGobbler and neilGobbler are likely to
> be slower and chew through at least twice as much heap as stackGobbler,
> which would be the implementation of choice for both simplicity and
> performance if it wasn't for this stack management problem.
Sure? That soun
Neil Mitchell wrote:
Hi
I have already partially scanned the list looking for the first
element that satisfies some condition, using a tail recursive search.
If such an element is found I want to split the list at that point.
span/break? I really can't believe the memory overhead of span is
Hi
> I have already partially scanned the list looking for the first
> element that satisfies some condition, using a tail recursive search.
>
> If such an element is found I want to split the list at that point.
span/break? I really can't believe the memory overhead of span is that
terrible, you
Neil Mitchell wrote:
Hi
If you mean an example of coding style and choice of stack vs. heap,
I already have..
http://www.haskell.org/pipermail/haskell-cafe/2008-January/038832.html
I'm at a loss as to why you want a strict version of take. It's clearly
not for performance, as it performs slow
Bulat Ziganshin wrote:
Hello Matthew,
Monday, February 4, 2008, 11:45:51 PM, you wrote:
That would be nice. But it's only beneficial if there are programs
which take large amounts of stack at some point, but then shrink down
to very little stack and continue for a reasonable amount of time.
Hi
> If you mean an example of coding style and choice of stack vs. heap,
> I already have..
>
> http://www.haskell.org/pipermail/haskell-cafe/2008-January/038832.html
I'm at a loss as to why you want a strict version of take. It's clearly
not for performance, as it performs slower. I'd say both t
On Feb 5, 2008 6:50 PM, Adrian Hey <[EMAIL PROTECTED]> wrote:
>
> Luke Palmer wrote:
> > On Feb 5, 2008 2:50 AM, Adrian Hey <[EMAIL PROTECTED]> wrote:
> >> I think it bites a lot less often than it otherwise would because most
> >> people will deliberately choose to use heap in preference to stack (
Luke Palmer wrote:
On Feb 5, 2008 2:50 AM, Adrian Hey <[EMAIL PROTECTED]> wrote:
I think it bites a lot less often than it otherwise would because most
people will deliberately choose to use heap in preference to stack (at
least when writing "eager" code) just to avoid the problem. But it still
b
On Feb 5, 2008 2:50 AM, Adrian Hey <[EMAIL PROTECTED]> wrote:
> I think it bites a lot less often than it otherwise would because most
> people will deliberately choose to use heap in preference to stack (at
> least when writing "eager" code) just to avoid the problem. But it still
> bites pretty of
| > Yes, this is the standard solution, and it's a good one because it has a
| > robust cost model (no quadratic costs).
|
| However, it's tricky to get right; copying is simpler. If a significant
| fraction of runtime (for some interesting program(s)) turned out to be
| consumed by copying stacks the
Simon Peyton-Jones wrote:
| First bad thing:
| Stack size (memory consumed) doubles each time it overflows.
|
| Second bad thing:
| Arbitrary limit on stack size unrelated to overall (heap) memory
| available.
|
| Third bad thing (the really bad thing):
| If a stack has temporarily grown (to 64M
| First bad thing:
| Stack size (memory consumed) doubles each time it overflows.
|
| Second bad thing:
| Arbitrary limit on stack size unrelated to overall (heap) memory
| available.
|
| Third bad thing (the really bad thing):
| If a stack has temporarily grown (to 64M say), it will never shrink
Stefan O'Rear wrote:
On Mon, Feb 04, 2008 at 10:13:12PM +, Adrian Hey wrote:
Also
remember that this behaviour never wastes more than 50% of the stack,
which is a relatively small amount.
Only if the stack is relatively small. Would you say the same about
heap, or about a stack that only ne
On Mon, Feb 04, 2008 at 10:13:12PM +, Adrian Hey wrote:
>> Also
>> remember that this behaviour never wastes more than 50% of the stack,
>> which is a relatively small amount.
>
> Only if the stack is relatively small. Would you say the same about
> heap, or about a stack that only needed 50% o
Neil Mitchell wrote:
Hi
First bad thing:
Stack size (memory consumed) doubles each time it overflows.
Bad thing? Assume that allocating memory takes some constant amount of
time, such as invoking overflow behaviour etc. To get the size of the
stack to n bytes with doubling takes O(log n), to
On Monday 04 February 2008, Neil Mitchell wrote:
> Hi
> That would be nice. But it's only beneficial if there are programs
> which take large amounts of stack at some point, but then shrink down
> to very little stack and continue for a reasonable amount of time.
From the 'when I was a lad' depar
Hi
> First bad thing:
> Stack size (memory consumed) doubles each time it overflows.
Bad thing? Assume that allocating memory takes some constant amount of
time, such as invoking overflow behaviour etc. To get the size of the
stack to n bytes with doubling takes O(log n), to get it there with a
c
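The arithmetic behind the doubling argument can be made concrete with a toy model (an illustration only, nothing to do with GHC's actual RTS code): growing from an initial size s to at least n bytes costs O(log n) reallocations when doubling, versus O(n) when growing by a fixed chunk.

```haskell
-- Number of grow steps needed to take a stack from initial size s
-- to at least n bytes, given a growth function (toy model).
growthSteps :: (Int -> Int) -> Int -> Int -> Int
growthSteps grow s n = length (takeWhile (< n) (iterate grow s))

doublings :: Int -> Int -> Int
doublings = growthSteps (*2)            -- O(log n) steps

fixedChunks :: Int -> Int -> Int
fixedChunks = growthSteps (+4096)       -- grow by 4k each time: O(n) steps
```

For example, doubling from 1k to 8M takes 13 steps, while growing 4k at a time would take over two thousand.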
Hello Simon,
Simon Peyton-Jones wrote:
| Sorry, but if what you say is true then things are even worse than I
| thought :-( This behaviour seems really bad to me, especially for
| concurrent programs.
Which behaviour precisely? Can you say what is wrong and what behaviour you
expect?
Roughl
On Monday 04 February 2008, Adrian Hey wrote:
> Yikes!
>
> Also, I can't help thinking that the common justification for the
> current limit (that it helps find alleged bugs) is a little lame.
> It only helps find bugs if one expects ones program to use less than
> 8M of stack (hence if it's using
Hi
> But if a program or library is deliberately designed to
> make use of stack (in preference to heap) for efficiency reasons
> then this is a source of bugs in otherwise perfectly correct and
> reasonable programs.
Can you give an example of a particular library or program, so
everyone can be
| Sorry, but if what you say is true then things are even worse than I
| thought :-( This behaviour seems really bad to me, especially for
| concurrent programs.
Which behaviour precisely? Can you say what is wrong and what behaviour you
expect?
S
Simon Peyton-Jones wrote:
| Yes, using lots of stack is clearly bad with ghc, but this is a ghc
| "bug". In fact the only reason these programs do use lots of stack
| (vs. heap) is just a peculiarity of ghc rts implementation, so it
| really should be ghc that fixes the problem, or at least admit
| Yes, using lots of stack is clearly bad with ghc, but this is a ghc
| "bug". In fact the only reason these programs do use lots of stack
| (vs. heap) is just a peculiarity of ghc rts implementation, so it
| really should be ghc that fixes the problem, or at least admits
| responsibility :-)
I do
On Tue, 2008-01-29 at 08:18 +, Adrian Hey wrote:
> Derek Elkins wrote:
> > While perhaps for a simple throw-away program it may be beneficial to
> > write code that allocates unnecessary stack, I personally consider
> > unnecessary stack use a bug. A stack overflow, to me, is always
> > indica
Neil Mitchell wrote:
My claim is that "any program which needs to adjust the stack size has
a laziness leak" - since I've made a universally quantified claim, a
couple of real examples should blow it out of the water.
But people often deliberately introduce laziness leaks for improved
efficiency
On Tue, Jan 29, 2008 at 07:38:24PM +, Neil Mitchell wrote:
> > A lot also depends on compiler (and associated rts), such as whether
> > or not it translates to CPS, thereby in effect building a "stack" (in
> > all but name) on the heap.
>
> If you burn a lot of heap, for not much gain, that's
On Tue, Jan 29, 2008 at 09:28:56AM +, Neil Mitchell wrote:
> Hi Adrian,
>
> > The "bug" is in ghc stack management. Why is it so important that the
> > stack size is arbitrarily limited?
>
> It's not, but it makes some things easier and faster. A better
> question is why is it important for t
ndmitchell:
> Hi
>
> > implementations. Certainly my experience of library tuning tells
> > me that (with ghc at least), designing your code and data structures
> > to keep heap allocation down to an absolute minimum is very important.
>
> Yes. Keeping allocation low is very important, be it heap
Hi
> implementations. Certainly my experience of library tuning tells
> me that (with ghc at least), designing your code and data structures
> to keep heap allocation down to an absolute minimum is very important.
Yes. Keeping allocation low is very important, be it heap or stack.
Heap allocation
Jonathan Cast wrote:
http://www.cs.princeton.edu/~appel/papers/45.ps is the traditional cite
here, no?
"Can be" is not the same as "is". A lot depends on exactly what you
call a "stack" and the relative efficiencies of stack vs. heap
implementations. Certainly my experience of library tuning te
On 29 Jan 2008, at 1:28 AM, Neil Mitchell wrote:
Hi Adrian,
The "bug" is in ghc stack management. Why is it so important that the
stack size is arbitrarily limited?
It's not, but it makes some things easier and faster. A better
question is why is it important for the stack to grow dynamical
Hi Adrian,
> The "bug" is in ghc stack management. Why is it so important that the
> stack size is arbitrarily limited?
It's not, but it makes some things easier and faster. A better
question is why is it important for the stack to grow dynamically. The
answer is that it's not.
> It's just an int
Derek Elkins wrote:
While perhaps for a simple throw-away program it may be beneficial to
write code that allocates unnecessary stack, I personally consider
unnecessary stack use a bug. A stack overflow, to me, is always
indicative of a bug.
The "bug" is in ghc stack management. Why is it so i
On 28 Jan 2008, at 11:00 PM, Henning Thielemann wrote:
On Mon, 28 Jan 2008, Jonathan Cast wrote:
Or, to put it another way, the bugs Java's stack overflow is designed
to catch are considered good style in Haskell.
I consider explicit recursion in Haskell to be bad style. One should use
higher-order
On Mon, 28 Jan 2008, Jonathan Cast wrote:
> Or, to put it another way, the bugs Java's stack overflow is designed
> to catch are considered good style in Haskell.
I consider explicit recursion in Haskell to be bad style. One should use
higher-order functions like 'map', 'fold', 'filter' and so on w
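A small illustration of that style point (hypothetical examples, not from the thread): the same functions written with explicit recursion and then with the corresponding higher-order functions.

```haskell
-- Explicit recursion:
doubleAll :: [Int] -> [Int]
doubleAll []     = []
doubleAll (x:xs) = 2*x : doubleAll xs

-- Preferred style: say the same thing with map,
-- leaving the recursion pattern to the library.
doubleAll' :: [Int] -> [Int]
doubleAll' = map (2*)

-- Likewise a hand-rolled sum versus a fold:
total :: [Int] -> Int
total []     = 0
total (x:xs) = x + total xs

total' :: [Int] -> Int
total' = foldr (+) 0
```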
On 28 Jan 2008, at 10:07 AM, Neil Mitchell wrote:
Hi
ghc uses a pretty conventional stack AFAIK, and it is arbitrarily
limited, but you can change the limit with +RTS options.
GHC uses a conventional stack (in that you put stuff at the top, and
take it off from the top), but it is not a con
On Mon, 2008-01-28 at 14:39 -0500, istarex wrote:
> On Jan 28, 2008 1:07 PM, Neil Mitchell <[EMAIL PROTECTED]> wrote:
>
> > To answer the question if Haskell has a "stack depth restriction ...
> > like Java" the answer is no. It has a stack depth restriction, but it's
> > absolutely nothing like Ja
On Jan 28, 2008 1:07 PM, Neil Mitchell <[EMAIL PROTECTED]> wrote:
> To answer the question if Haskell has a "stack depth restriction ...
> like Java" the answer is no. It has a stack depth restriction, but it's
> absolutely nothing like Java in the way it uses the stack, so you
> can't compare them
Hi
> ghc uses a pretty conventional stack AFAIK, and it is arbitrarily
> limited, but you can change the limit with +RTS options.
GHC uses a conventional stack (in that you put stuff at the top, and
take it off from the top), but it is not a conventional stack in the
way imperative programs work.
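By way of illustration of the +RTS options mentioned above (the maximum stack size is set with -K; the default in GHC of that era was 8M), here is a deliberately deep non-tail recursion — a hypothetical example, not from the thread:

```haskell
-- Each call to deep pushes a frame for the pending (1 +) context,
-- so the stack used is proportional to n.
deep :: Int -> Int
deep 0 = 0
deep n = 1 + deep (n - 1)

main :: IO ()
main = print (deep 1000000)
-- Under the default limit this may abort with "stack overflow";
-- the limit can be raised at run time without recompiling:
--   ./deep +RTS -K64m -RTS
```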
Neil Mitchell wrote:
Hi Istarex,
Does Haskell have a maximum stack depth restriction like Java &
Python, or is it only limited by the maximum stack available on the
system (i.e. what would be available to a C program)?
You are probably thinking that recursive functions use up program
stack, a
Hi Istarex,
> Does Haskell have a maximum stack depth restriction like Java &
> Python, or is it only limited by the maximum stack available on the
> system (i.e. what would be available to a C program)?
You are probably thinking that recursive functions use up program
stack, and hence the stack
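The point being led up to here can be shown with the standard foldl example (a common illustration, not Neil's exact words): in GHC it is often not the recursion depth itself but the forcing of a long chain of lazily built thunks that consumes stack.

```haskell
import Data.List (foldl')

-- foldl builds a chain of (+) thunks on the heap; evaluating the
-- final thunk then needs stack proportional to the list length.
lazySum :: [Integer] -> Integer
lazySum = foldl (+) 0

-- foldl' forces the accumulator at each step, so it runs in
-- constant stack.
strictSum :: [Integer] -> Integer
strictSum = foldl' (+) 0
```

On a long enough list, lazySum overflows the stack while strictSum does not, even though both are tail recursive on paper.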
44 matches