Excellent point. Now fixed. Thanks for the report.
Simon
| -----Original Message-----
| From: [EMAIL PROTECTED] [mailto:glasgow-haskell-bugs-
| [EMAIL PROTECTED] On Behalf Of Matthias Neubauer
| Sent: 27 July 2004 18:13
| To: [EMAIL PROTECTED]
| Subject: Browsing a module with exports only
Hi Tom,
I want to use the memo function for implementing a dynamic
programming algorithm in Haskell.
This is needed to cache intermediate results.
Can anyone tell me where I can find some examples that use the memo
function, or even a tutorial?
Here are some refs: Hughes, 1985.
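For a first example of the kind of memoisation Hughes describes, a minimal sketch needs no library support at all: back the function with a lazy list so that each sub-result is computed once and then shared. The names (`fibs`, `fib`) are illustrative, not from any library:

```haskell
-- Memoisation via a lazy data structure: fibs caches every result of fib,
-- and the recursive calls read from that cache instead of recomputing.
fibs :: [Integer]
fibs = map fib [0 ..]

fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fibs !! (n - 1) + fibs !! (n - 2)
```

In GHCi, `fib 30` evaluates quickly because each `fibs` element is forced at most once; note that `(!!)` is O(n), so for serious use you would swap the list for an array.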
Tom writes:
I want to use the memo function for implementing a dynamic
programming algorithm in Haskell.
This is needed to cache intermediate results.
Can anyone tell me where I can find some examples that use the memo
function, or even a tutorial?
You should also look at
Stretching
Note that it doesn't have to be lazy.
Most traditional dynamic programming techniques use arrays to store
intermediate values.
Lazy techniques can use a prohibitive amount of memory due to the
size of the thunks. The best general technique I have found in
Haskell is to use unboxed mutable arrays.
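A minimal sketch of that array-based approach, tabulating the same function with an unboxed mutable array in the ST monad (the name `fibDP` is illustrative, not from the thread):

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, newArray, readArray, writeArray)

-- Bottom-up dynamic programming over an unboxed mutable array:
-- cells hold evaluated Ints, not thunks, so memory use stays flat.
-- Assumes n >= 0.
fibDP :: Int -> Int
fibDP n = runST $ do
  arr <- newArray (0, max 1 n) 0 :: ST s (STUArray s Int Int)
  writeArray arr 1 1
  forM_ [2 .. n] $ \i -> do
    x <- readArray arr (i - 1)
    y <- readArray arr (i - 2)
    writeArray arr i (x + y)
  readArray arr n
```

The table is filled strictly in index order, so no intermediate thunks accumulate, which is exactly the memory advantage over the lazy version.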
There is some confusing language in section 3.17.2, case #6, of the
Haskell Report. Furthermore, there is a bug in either GHC or Hugs,
depending on which way you interpret it.
It says:
# Matching against a constructor using labeled fields is the same as
# matching ordinary constructor patterns except that the fields are
# matched in the order they are named in the field list.
The intention in the report was to match in the order listed in the
pattern - you need not consult the data declaration to understand the
ordering. I think the report is clear enough - it's just a bug in
GHC.
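The two readings can be told apart with a small sketch (types and names here are our own, purely for illustration). Under the report's intended reading, `h1` fails on field `a` before ever touching `b`; whether `h2` diverges on `t` is exactly the disputed point, since it depends on whether the implementation matches in pattern order or declaration order:

```haskell
data T = T { a :: Int, b :: Int }

t :: T
t = T { a = 2, b = undefined }

-- h1 lists 'a' first: matching 2 against the literal 1 fails, so the
-- second clause fires without 'b' ever being forced.
h1 :: T -> Bool
h1 (T { a = 1, b = 0 }) = True
h1 _                    = False

-- h2 lists 'b' first: under in-pattern-order matching, 'h2 t' forces
-- the undefined field and diverges; under declaration-order matching
-- it fails on 'a' and returns False. Do not rely on either behaviour.
h2 :: T -> Bool
h2 (T { b = 0, a = 1 }) = True
h2 _                    = False
```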
John
That was me. I think you're underestimating the cost of starting
threads even in this very lightweight world.
Maybe... Perhaps Haskell could be made to resemble dataflow instructions
more... If, when a computation completes, we insert the result directly
into the data structure which represents
MR K P SCHUPKE wrote:
That was me. I think you're underestimating the cost of starting
threads even in this very lightweight world.
Maybe... Perhaps Haskell could be made to resemble dataflow instructions
more... If, when a computation completes, we insert the result directly
into the data
Erm, when I said no overhead I meant there is no overhead to choosing
an instruction from a different thread compared to choosing an instruction
from the same thread... Obviously the overall scheduling overhead will
increase.
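For a rough sense of the per-computation cost being discussed, here is a sketch (the helper name `squaresViaThreads` is hypothetical) that forks one GHC thread per trivial computation; even with lightweight threads, the forkIO/MVar plumbing dwarfs the `i * i` work itself:

```haskell
import Control.Concurrent (forkIO, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM)

-- One thread per tiny computation: each forkIO + MVar round-trip has a
-- fixed scheduling cost that dominates the actual work being done.
squaresViaThreads :: Int -> IO Int
squaresViaThreads n = do
  vars <- forM [1 .. n] $ \i -> do
    v <- newEmptyMVar
    _ <- forkIO (putMVar v (i * i))  -- trivial work per thread
    return v
  results <- mapM takeMVar vars
  return (sum results)
```

Batching many such computations per thread, or letting results flow directly into the consuming structure as in a dataflow machine, amortises that fixed cost.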
The real killer, of course, is memory latency. The cache resources