Hi!
On Sat, Jul 26, 2008 at 1:35 PM, Mitar <[EMAIL PROTECTED]> wrote:
> No support for Mac OS X. :-(
Apple provides Shark in the Xcode Tools, which has something called an "L2
Cache Miss Profile". I will just have to understand the results it
produces.
Mitar
___
Hi!
On Sat, Jul 26, 2008 at 3:17 AM, Ben Lippmeier <[EMAIL PROTECTED]> wrote:
> http://valgrind.org/info/tools.html
No support for Mac OS X. :-(
Mitar
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
On Sat, 2008-07-26 at 03:02 +0200, Mitar wrote:
> Hi!
>
> > If we spend so long blocked on memory reads that we're only utilising
> > 50% of a core's time then there's lots of room for improvements if we
> > can fill in that wasted time by running another thread.
>
> How can you see how much your program waits because of L2 misses?
A tool originally developed to measure cache misses in GHC :)
Ben.Lippmeier:
>
> http://valgrind.org/info/tools.html
>
> On 26/07/2008, at 11:02 AM, Mitar wrote:
>
> >Hi!
> >
> >>If we spend so long blocked on memory reads that we're only utilising
> >>50% of a core's time then there's lots of room for improvements if we
> >>can fill in that wasted time by running another thread.
http://valgrind.org/info/tools.html
On 26/07/2008, at 11:02 AM, Mitar wrote:
> Hi!
> > If we spend so long blocked on memory reads that we're only utilising
> > 50% of a core's time then there's lots of room for improvements if we
> > can fill in that wasted time by running another thread.
> How can you see how much your program waits because of L2 misses?
Hi!
> If we spend so long blocked on memory reads that we're only utilising
> 50% of a core's time then there's lots of room for improvements if we
> can fill in that wasted time by running another thread.
How can you see how much your program waits because of L2 misses?
I have been playing l
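For readers on Linux wondering the same thing: Cachegrind, from the valgrind toolset linked above, reports D1/L2 miss counts for an ordinary compiled binary. As a sketch (the program, array size, and stride value here are all invented for illustration), a deliberately cache-unfriendly traversal makes the effect visible:

```haskell
-- A tiny, deliberately cache-unfriendly benchmark (illustrative only):
-- walking a large unboxed array with a big stride touches a fresh cache
-- line on almost every read, while stride 1 walks lines sequentially.
import Data.Array.Unboxed

size :: Int
size = 1024 * 1024

arr :: UArray Int Int
arr = listArray (0, size - 1) [0 ..]

-- Sum the elements in stride order. With a stride coprime to the size,
-- every index is still visited exactly once, so the result is the same
-- for any such stride; only the memory-access pattern changes.
stridedSum :: Int -> Int
stridedSum stride =
  sum [ arr ! ((i * stride) `mod` size) | i <- [0 .. size - 1] ]

main :: IO ()
main = print (stridedSum 4097)
```

Compiling with `ghc -O2` and running the binary under `valgrind --tool=cachegrind` prints per-cache miss totals; comparing stride 1 against a large odd stride should show the data-cache read-miss rate climb while the printed sum stays identical.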
On 25/07/2008, at 12:42 PM, Duncan Coutts wrote:
> Of course then it means we need to have enough work to do. Indeed we
> need quite a bit just to break even because each core is relatively
> stripped down without all the out-of-order execution etc.
I don't think that will hurt too much. The code t
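To make "enough work to do" concrete, here is a minimal sketch (base libraries only; the chunk count and workload are made up) of splitting a job across several Haskell threads, which the RTS can spread over hardware threads when the program is built with `-threaded` and run with `+RTS -N`:

```haskell
-- Farm a summation out to n Haskell threads and collect the partial
-- results through MVars. Nothing here is SPARC-specific; it is just the
-- shape of "give each hardware thread its own chunk of work".
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM)

-- Split [lo..hi] into n contiguous chunks and sum each in its own thread.
parSum :: Int -> Int -> Int -> IO Int
parSum n lo hi = do
  let step   = (hi - lo + 1 + n - 1) `div` n
      chunks = [ (a, min hi (a + step - 1)) | a <- [lo, lo + step .. hi] ]
  vars <- forM chunks $ \(a, b) -> do
    v <- newEmptyMVar
    _ <- forkIO (putMVar v $! sum [a .. b])  -- force the sum in the worker
    return v
  partials <- mapM takeMVar vars
  return (sum partials)

main :: IO ()
main = parSum 8 1 1000000 >>= print
</```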
On 25 Jul 2008, at 10:55 am, Duncan Coutts wrote:
> The problem of course is recursion and deeply nested call stacks which
> don't make good use of register windows because they keep having to
> interrupt to spill them to the save area.
A fair bit of thought was put into SPARC V9 to making saving an
On Fri, 2008-07-25 at 10:38 +1000, Ben Lippmeier wrote:
> I'd be more interested in the 8 x hardware threads per core, [1]
> suggests that (single threaded) GHC code spends over half its time
> stalled due to L2 data cache miss.
Right, that's what I think is most interesting and why I wanted
...
> The UltraSPARC T1/T2 architecture supports very fast thread
> synchronisation (by taking advantage of the fact that all threads
> share the same L2 cache).
...
Ah, scratch that second part then - though this is perhaps less of an
issue when you have 4MB of L2 cache, vs the 256k cache for the m
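For what that synchronisation looks like at the Haskell level: GHC threads typically synchronise through MVars, and a shared L2 means the cache line backing the MVar needn't migrate between per-core caches on every handoff. A minimal ping-pong sketch (the round count is arbitrary):

```haskell
-- Two threads bouncing a counter back and forth through a pair of MVars.
-- Each round trip is two MVar handoffs; on a chip where all hardware
-- threads share one L2, those handoffs stay in the shared cache.
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forever)

pingPong :: Int -> IO Int
pingPong rounds = do
  ping <- newEmptyMVar
  pong <- newEmptyMVar
  -- Echo thread: take from ping, increment, reply on pong.
  _ <- forkIO $ forever $ takeMVar ping >>= \x -> putMVar pong (x + 1)
  let loop acc 0 = return acc
      loop acc k = do
        putMVar ping acc
        acc' <- takeMVar pong
        loop acc' (k - 1)
  loop 0 rounds

main :: IO ()
main = pingPong 10000 >>= print
```

Timing a large round count with and without `+RTS -N2` is a crude but simple way to see how cheap (or not) the handoff is on a given machine.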
On 25/07/2008, at 8:55 AM, Duncan Coutts wrote:
> Right. GHC on SPARC has also always disabled the register window when
> running Haskell code (at least for registerised builds) and only uses it
> when using the C stack and calling C functions.
I'm not sure whether register windows and continuat
On Thu, 2008-07-24 at 14:38 -0700, John Meacham wrote:
> Neat stuff. I used to work at Sun in the Solaris kernel group; the SPARC
> architecture is quite elegant. I wonder if we can find an interesting
> use for the register windows in a Haskell compiler. Many compilers for
> non-C-like languages
Neat stuff. I used to work at Sun in the Solaris kernel group; the SPARC
architecture is quite elegant. I wonder if we can find an interesting
use for the register windows in a Haskell compiler. Many compilers for
non-C-like languages (such as Boquist's one that jhc is based on (in
spirit, if not c
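One reason register windows are hard to exploit from GHC, at least: compiled Haskell keeps its own growable stack and doesn't use the C calling convention between Haskell functions, so even deep non-tail recursion never enters a register window. A toy example (the depth is chosen arbitrarily):

```haskell
-- Deep non-tail recursion: each call pushes a frame, but onto GHC's own
-- (heap-allocated, growable) Haskell stack, not the C stack that SPARC
-- register windows back. The C stack is only touched for FFI calls.
depth :: Int -> Int
depth 0 = 0
depth n = 1 + depth (n - 1)   -- non-tail: the "1 +" keeps a frame live

main :: IO ()
main = print (depth 100000)
```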
On Thu, 2008-07-24 at 16:43 +1200, Richard A. O'Keefe wrote:
> On 24 Jul 2008, at 3:52 am, Duncan Coutts wrote:
> [Sun have donated a T5120 server + USD10k to develop
> support for Haskell on the SPARC.]
>
> This is wonderful news.
>
> I have a 500MHz UltraSPARC II on my desktop running Solaris 2.10.
On 2008 Jul 24, at 0:43, Richard A. O'Keefe wrote:
> So binary distributions for SPARC/Solaris and SPARC/Linux would
> be very very nice things for this new project to deliver early.
> (Or some kind of source distribution that doesn't need a working
> GHC to start with.)
I'm still working on SPARC/Solaris
On 24 Jul 2008, at 3:52 am, Duncan Coutts wrote:
[Sun have donated a T5120 server + USD10k to develop
support for Haskell on the SPARC.]
This is wonderful news.
I have a 500MHz UltraSPARC II on my desktop running Solaris 2.10.
Some time ago I tried to install GHC 6.6.1 on it, but ended up
with
http://haskell.org/opensparc/
I am very pleased to announce a joint project between Sun Microsystems
and the Haskell.org community to exploit the high performance
capabilities of Sun's latest multi-core OpenSPARC systems via Haskell!
http://opensparc.net/
Sun has donated a T5120 server and USD 10k to develop support for
Haskell on the SPARC.