Andrew Bromage:
>Pattern matching on the LHS of a function definition looks, for all the
>world, like a set of rewrite rules. That's because, in some sense, they
>are.
>
>In this definition:
>
>f (x:xs) = 1 + f xs
f [] = 0
>
>Intuitively, the first rule should only "fire" if the express
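The ordered-rewrite-rule reading of the quoted definition can be sketched as follows (a minimal, self-contained version, with the base case returning 0 so that the two equations type-check):

```haskell
-- Equations are tried top to bottom; the first one whose pattern
-- matches the argument "fires", just like an ordered rewrite system.
f :: [a] -> Int
f (x:xs) = 1 + f xs   -- fires only for a non-empty list
f []     = 0          -- fires otherwise

main :: IO ()
main = print (f "abc")   -- prints 3
```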
When I create a socket, listen, and close it, it shuts down cleanly, but when I pass the socket to a new thread I get four to six copies of the following message:
awaitRequests: unexpected wait return code 4294967295
These seem to come after the main thread exits.
It would seem that the socket has a
G'day all.
Quoting Duncan Coutts <[EMAIL PROTECTED]>:
> Or finally, the "it's what you want most often" argument.
How about the "it's the most natural thing" argument?
Pattern matching on the LHS of a function definition looks, for all the
world, like a set of rewrite rules. That's because, in some sense, they are.
On Tue, 2004-03-30 at 18:01, S. Alexander Jacobson wrote:
> Thanks for the ~ syntax, but my question is really
> why you need it? What benefit do you get from
> "refutable patterns"?
>
> Alternatively, would anything break if a future
> Haskell just treated all patterns as irrefutable?
A short p
A lot. If everything were irrefutable, then the following:
> mymap f (x:xs) = f x : mymap f xs
> mymap f [] = []
would never get to the second branch and would fail when it reached the
end of a list. So would it if you put the lines in the other, "more
natural," order, though perhaps it's less clear
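What "treating all patterns as irrefutable" would do can be demonstrated directly with the ~ syntax (a sketch using the recursive mymap; the exception handling is only there to show the failure):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- With an irrefutable (~) pattern this equation matches *any*
-- argument, including [], so a "mymap f [] = []" equation below it
-- could never fire.  x and xs are bound lazily instead.
mymap :: (a -> b) -> [a] -> [b]
mymap f ~(x:xs) = f x : mymap f xs

main :: IO ()
main = do
  print (take 2 (mymap (+1) [10, 20 :: Int]))   -- [11,21]: fine so far
  r <- try (evaluate (mymap (+1) [10, 20 :: Int] !! 2))
  case r :: Either SomeException Int of
    Left _  -> putStrLn "failed past the end of the list"
    Right n -> print n
```

Note that the failure only shows up when an element past the real end of the list is demanded: the lazy spine goes on forever, and forcing the third element hits the failed (x:xs) binding.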
Thanks for the ~ syntax, but my question is really
why you need it? What benefit do you get from
"refutable patterns"?
Alternatively, would anything break if a future
Haskell just treated all patterns as irrefutable?
-Alex-
On Tue, 30 Mar 2004, Simon Marlow wrote:
> The upshot of what he found is that we could benefit from some
> prefetching, perhaps on the order of 10-20%. Particularly prefetching
> in the allocation area during evaluation, to ensure that memory about to
> be written to is in the cache, and similar
> "aj" == S Alexander Jacobson <[EMAIL PROTECTED]> writes:
aj> I would assume that this function:
aj> foo list@(h:t) = list
aj> is equivalent to
aj> foo list = list
aj> where (h:t)=list
aj> But passing [] to the first generates an error
aj> even though h
On Tue, 2004-03-30 at 17:30, S. Alexander Jacobson wrote:
> I would assume that this function:
>
> foo list@(h:t) = list
>
> is equivalent to
>
> foo list = list
> where (h:t)=list
>
> But passing [] to the first generates an error
> even though h and t are never used! Passing [] to
>
I would assume that this function:
foo list@(h:t) = list
is equivalent to
foo list = list
where (h:t)=list
But passing [] to the first generates an error
even though h and t are never used! Passing [] to
the second works just fine.
At this point, I sort of understand the reason for
M
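The difference being asked about (the names foo1/foo2/foo3 below are mine, for illustration) is that a LHS pattern is matched eagerly, while a where binding, or equivalently a ~ pattern, is matched lazily:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- LHS patterns are matched eagerly: foo1 [] fails before the body runs.
foo1 :: [a] -> [a]
foo1 list@(h:t) = list

-- where bindings are matched lazily: h and t are only demanded if the
-- body uses them, so foo2 [] returns [] without ever matching (h:t).
foo2 :: [a] -> [a]
foo2 list = list
  where (h:t) = list

-- A ~ (irrefutable) pattern makes the LHS version behave like foo2.
foo3 :: [a] -> [a]
foo3 list@(~(h:t)) = list

main :: IO ()
main = do
  print (foo2 ([] :: [Int]))   -- []
  print (foo3 ([] :: [Int]))   -- []
  r <- try (evaluate (foo1 ([] :: [Int])))
  case r :: Either SomeException [Int] of
    Left _   -> putStrLn "foo1 []: pattern match failure"
    Right xs -> print xs
```

This is also why refutable LHS patterns are useful at all: it is the eager match failing that lets control fall through to the next equation in a multi-equation definition.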
As another data point, when I was writing the run-time system both for
pH (on an 8-way Sun) and later for Eager Haskell (on x86) I
experimented with simple prefetching of various sorts. Two things
seemed to be moderately effective:
* The heap was organized into "chunks" which were parceled out
On Tue, 30 Mar 2004 [EMAIL PROTECTED] wrote:
>So... was there a reason the GHC work of this research wasn't merged into the
>GHC distribution? I understood the report to say that the GHC compiler was
>indeed modified to implement the simple prefetching.
If I recall Nick's presentation of the pap
On Tuesday 30 March 2004 05:50, John Hughes wrote:
> execution time is due to data cache misses, and so a speed up of at most
> 2.5x is possible by improving the cache behaviour. That's certainly very
That's decent, if it could be achieved.
> The paper reports a 22% speed-up from prefetching to a
On Tue, 2004-03-30 at 11:51, Simon Marlow wrote:
> I've done some cache profiling of GHC's code myself, and Nick Nethercote
> did some very detailed measurements a while back (see his recent post
> for details).
>
> The upshot of what he found is that we could benefit from some
> prefetching, perh
> From: John Hughes <[EMAIL PROTECTED]>
> >Actually the cache behaviour of code generated by GHC isn't at all bad.
> >I know because I ran a student project a couple of years ago to
> >implement cache-friendly optimisations. The first thing they did was
> >cache profiling of some benchmark
Adrian Hey wrote:
On Monday 29 Mar 2004 3:49 pm, John Hughes wrote:
> Actually the cache behaviour of code generated by GHC isn't at all bad.
> I know because I ran a student project a couple of years ago to
> implement cache-friendly optimisations. The first thing they did was
> cache pro
On Monday 29 Mar 2004 3:49 pm, John Hughes wrote:
> Actually the cache behaviour of code generated by GHC isn't at all bad.
> I know because I ran a student project a couple of years ago to
> implement cache-friendly optimisations. The first thing they did was
> cache profiling of some benchmarks,