'd have to modify it to remove
> DphOps*.
>
> https://github.com/ghc/packages-dph/blob/master/dph-event-seer/src/Main.hs
>
> Amos
>
> On Tue, 31 Mar 2015 at 04:38 Dominic Steinitz wrote:
> Does anyone know of any tools for analysing parallel program performance?
>
Does anyone know of any tools for analysing parallel program performance?
I am trying to use threadscope but it keeps crashing with my 100M log file and
ghc-events-analyze is not going to help as I have many hundreds of threads all
carrying out the same computation. I think I’d like a library
f this detracts from my original thrust, which is that
something that looks like an afternoon's work is much more complicated.
Plus, you'll end up fighting with/hacking on a build system instead of what
you meant to work on.
>
> Cheers,
> Simon
>
>
>
>
>
>>
y hurts, then you could have a way to tell your build system about
a file when it is removed from the project, so that it can delete the
build artifacts that go with it.
Anyway, are there other problems you'd like to bring to our attention?
Cheers,
Simon
Also, the most common use cas
c/ghc/ticket/8029 ). The workarounds to deal
with this are not as straightforward. The alternative is to live with the
occasional build error that can only be fixed by blowing away the entire
build dir (a remedy that I often need with ghc's source tree, as even make
maintainer-clean doesn't
venient way to create a common build
directory and manage multiple targets. This is the approach I would
take to building multiple executables from the same source files.
ghc doesn't do any locking of build files AFAIK. Running parallel ghc
commands for two main modules that have the same imp
On Sun, Jan 5, 2014 at 3:54 PM, Erik de Castro Lopo wrote:
> John Lato wrote:
>
> > ghc --make doesn't allow building several binaries in one run, however if
> > you use cabal all the separate runs will use a shared build directory, so
> > subsequent builds will be able to take advantage of the in
John Lato wrote:
> ghc --make doesn't allow building several binaries in one run, however if
> you use cabal all the separate runs will use a shared build directory, so
> subsequent builds will be able to take advantage of the intermediate output
> of the first build.
As long as the ghc-options f
roach I would take to building multiple
executables from the same source files.
ghc doesn't do any locking of build files AFAIK. Running parallel ghc
commands for two main modules that have the same import, using the same
working directory, is not safe. In pathological cases the two diffe
Hi,
I have a Haskell project where a number of executables are produced
from mostly the same modules. I'm using a Makefile to enable parallel
builds. I received advice[1] that ghc -M is broken, but that there
is parallel ghc --make in HEAD.
As far as I can tell, ghc --make does not
In case anyone wants to contribute to it, I have submitted a bug report [1].
Best,
Facundo
[1] https://ghc.haskell.org/trac/ghc/ticket/8521
On Mon, Oct 21, 2013 at 9:03 AM, Facundo Domínguez
wrote:
>> Oh I see; the problem is the GHC RTS is attempting to shut down,
>> and in order to do this it
> Oh I see; the problem is the GHC RTS is attempting to shut down,
> and in order to do this it needs to grab all of the capabilities.
Thanks, again. However, the program doesn't seem to be blocking when
the main thread finishes, but rather in the "takeMVar mv1" line. I'm
copying the modified vers
Oh I see; the problem is the GHC RTS is attempting to shut down,
and in order to do this it needs to grab all of the capabilities. However,
one of them is in an uninterruptible loop, so the program hangs (e.g.
if you change the program as follows:
main :: IO ()
main = do
  forkIO $ do
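The program in the message above is truncated. As a contrast to the hanging case being diagnosed, here is a minimal complete program (an assumed structure, not the exact code from the thread) in which the main thread blocks on an MVar filled by the forked thread, so the RTS shuts down cleanly once the child has actually finished:

```haskell
import Control.Concurrent

-- The main thread waits on an MVar that the forked thread fills when
-- its work is done, so program exit never races the child thread.
main :: IO ()
main = do
  mv <- newEmptyMVar
  _ <- forkIO $ do
         let s = sum [1 .. 1000000 :: Int]
         s `seq` putMVar mv s   -- force the result before handing it over
  r <- takeMVar mv             -- blocks until the child finishes
  print r
```

The hang in the thread arises when a forked thread instead spins in a tight non-allocating loop: it never reaches a safe point, so the RTS cannot reclaim its capability at shutdown.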
On 26/06/2012 00:42, Ryan Newton wrote:
However, the parallel GC will be a problem if one or more of your
cores is being used by other process(es) on the machine. In that
case, the GC synchronisation will stall and performance will go down
the drain. You can often see this on a
port back. Although
>> I've since found that out of 3 not-identical systems, this problem
>> only occurs on one. So I may try different kernel/system libs and see
>> where that gets me.
>>
>> -qg is funny. My interpretation from the results so far is that, when
>
>
> However, the parallel GC will be a problem if one or more of your cores is
> being used by other process(es) on the machine. In that case, the GC
> synchronisation will stall and performance will go down the drain. You can
> often see this on a ThreadScope profile as a big
funny. My interpretation from the results so far is that, when
the parallel collector doesn't get stalled, it results in a big win.
But when parGC does stall, it's slower than disabling parallel gc
entirely.
Parallel GC is usually a win for idiomatic Haskell code, it may or may
not be a good idea
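The trade-off described above can be explored directly with RTS flags. The program name below is hypothetical; flag meanings are as documented in the GHC user's guide (`-qg` with no argument disables parallel GC, `-qg1` enables it only from generation 1 upward):

```shell
ghc -O2 -threaded -rtsopts MyProg.hs
./MyProg +RTS -N8 -s -RTS        # default: parallel GC in all generations
./MyProg +RTS -N8 -qg -s -RTS    # parallel GC disabled entirely
./MyProg +RTS -N8 -qg1 -s -RTS   # parallel GC only for old-generation collections
```

`-qg1` is often the middle ground when young-generation parallel GC stalls: the frequent minor collections run sequentially, while the rarer major collections still parallelise.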
Bryan O'Sullivan :
> On Mon, Jun 18, 2012 at 9:32 PM, John Lato wrote:
>
> I had thought the last core parallel slowdown problem was fixed a
> while ago, but apparently not?
>
> Simon Marlow has thought so in the not too distant past (since he did the
> work), if
On 19/06/2012, at 13:53 , Ben Lippmeier wrote:
>
> On 19/06/2012, at 10:59 , Manuel M T Chakravarty wrote:
>
>> I wonder, do we have a Repa FAQ (or similar) that explains such issues? (And
>> is easily discoverable?)
>
> I've been trying to collect the main points in the haddocs for the main
On 19/06/2012, at 10:59 , Manuel M T Chakravarty wrote:
> I wonder, do we have a Repa FAQ (or similar) that explains such issues? (And
> is easily discoverable?)
I've been trying to collect the main points in the haddocs for the main module
[1], but this one isn't there yet.
I need to update t
On Mon, Jun 18, 2012 at 9:32 PM, John Lato wrote:
>
> I had thought the last core parallel slowdown problem was fixed a
> while ago, but apparently not?
>
Simon Marlow has thought so in the not too distant past (since he did the
work), if my recollectio
ar is that, when
the parallel collector doesn't get stalled, it results in a big win.
But when parGC does stall, it's slower than disabling parallel gc
entirely.
I had thought the last core parallel slowdown problem was fixed a
while ago, but apparently not?
Thanks,
John
On Tue, Jun 19,
>> everytime the OS needs to do something or something?
>
> This can be a problem for data parallel computations (like in Repa). In Repa
> all threads in the gang are supposed to run for the same time, but if one
> gets swapped out by the OS then the whole gang is stalled
u used all the CPUs on your machine under Linux?
>
> Presumably very tight coupling that is causing all the threads to stall
> everytime the OS needs to do something or something?
This can be a problem for data parallel computations (like in Repa). In Repa
all threads in the gang are supposed to run f
On June 18, 2012 04:20:51 John Lato wrote:
> Given this, can anyone suggest any likely causes of this issue, or
> anything I might want to look for? Also, should I be concerned about
> the much larger gc_alloc_block_sync level for the slow run? Does that
> indicate the allocator waiting to alloc
Hello,
I have a program that is intermittently experiencing performance
issues that I believe are related to parallel GC, and I was hoping to
get some advice on how I might improve it. Essentially, any given
execution is either slow or fast (the same executable, without
recompiling), most often
On 24/02/2012 08:16, Conrad Parker wrote:
Hi,
recently we've been tweaking our internal build system at Tsuru to
handle parallel builds of both cabal packages via 'cabal-sort
--makefile' and our local code tree via 'ghc -M'. In addition to the
recompilation checker fixe
ctory/src/System-Directory.html#createDirectoryIfMissing
But maybe the docs could reflect that
-Ryan
On Fri, Feb 24, 2012 at 3:16 AM, Conrad Parker wrote:
> Hi,
>
> recently we've been tweaking our internal build system at Tsuru to
> handle parallel builds of both cabal pa
Hi,
recently we've been tweaking our internal build system at Tsuru to
handle parallel builds of both cabal packages via 'cabal-sort
--makefile' and our local code tree via 'ghc -M'. In addition to the
recompilation checker fixes of #5878, the following would be great t
> Ah, but you're measuring the startup time of ghc --make, which is not the
same as the work that each individual ghc would do if ghc were invoked
separately on each module, for two reasons:
Excellent, sign me up for this plan then :) ghc on a single file is very
quick.
On 03/09/2011 02:05, Evan Laforge wrote:
Another way to do this would be to have GHC --make invoke itself to
compile each module separately. Actually I think I prefer this method,
although it might be a bit slower since each individual compilation has
to read lots of interface files. The main G
>> Another way to do this would be to have GHC --make invoke itself to
>> compile each module separately. Actually I think I prefer this method,
>> although it might be a bit slower since each individual compilation has
>> to read lots of interface files. The main GHC --make process would do
>> t
Hi,
Am Freitag, den 02.09.2011, 09:07 +0100 schrieb Simon Marlow:
> On 01/09/2011 18:02, Evan Laforge wrote:
> >>> It's an interesting idea that I hadn't thought of. There would have to be
> >>> an atomic file system operation to "commit" a compiled module - getting
> >>> that
> >>> right could
On 01/09/2011 18:02, Evan Laforge wrote:
It's an interesting idea that I hadn't thought of. There would have to be
an atomic file system operation to "commit" a compiled module - getting that
right could be a bit tricky (compilation isn't deterministic, so the commit
has to be atomic).
I suppo
>> It's an interesting idea that I hadn't thought of. There would have to be
>> an atomic file system operation to "commit" a compiled module - getting that
>> right could be a bit tricky (compilation isn't deterministic, so the commit
>> has to be atomic).
>
> I suppose you could just rename it i
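The "commit via rename" idea can be sketched as follows. This is illustrative only (file names hypothetical), and relies on rename(2) being atomic within a single filesystem; a real tool would pick a unique temporary name (e.g. with `openTempFile`) so that two racing compilers don't clobber each other's partial output:

```haskell
import System.Directory (renameFile)

-- Write a build artifact under a temporary name, then rename it into
-- place. Because the rename is atomic, a concurrent reader sees either
-- no file or a complete file, never a half-written one.
commitFile :: FilePath -> String -> IO ()
commitFile dest contents = do
  let tmp = dest ++ ".tmp"   -- a real tool would use a unique temp name
  writeFile tmp contents
  renameFile tmp dest

main :: IO ()
main = do
  commitFile "Example.hi" "interface contents"
  readFile "Example.hi" >>= putStrLn
```

As noted in the thread, compilation isn't deterministic, so atomicity of the commit step is what makes concurrent "last writer wins" acceptable.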
On Thu, Sep 1, 2011 at 8:49 AM, Simon Marlow wrote:
> On 01/09/2011 08:44, Evan Laforge wrote:
>
>> Yes, the plan was to eventually have a parallel --make mode.
>>>
>>
>> If that's the goal, wouldn't it be easier to start many ghcs?
>>
>
On 01/09/2011 08:44, Evan Laforge wrote:
Yes, the plan was to eventually have a parallel --make mode.
If that's the goal, wouldn't it be easier to start many ghcs?
It's an interesting idea that I hadn't thought of. There would have to
be an atomic file system ope
On 1 September 2011 08:44, Evan Laforge wrote:
>> Yes, the plan was to eventually have a parallel --make mode.
>
> If that's the goal, wouldn't it be easier to start many ghcs?
Yes. With Scion I'm in the process of moving away from using GHC's
compilation manage
> Yes, the plan was to eventually have a parallel --make mode.
If that's the goal, wouldn't it be easier to start many ghcs?
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/lis
On 30/08/2011 00:42, Thomas Schilling wrote:
The performance problem was due to the use of unsafePerformIO or other
thunk-locking functions. The problem was that such functions can
cause severe performance problems when using a deep stack. The
problem is that these functions need to traverse th
ll re-entrantly...
> what neat stuff could we do with a re-entrant ghc? Could it
> eventually lead to an internally parallel ghc or are there deeper
> reasons it's hard to parallelize compilation? That would be really
> cool, if possible. In fact, I don't know of any para
safe ghc provide? I
know ghc api users have go to some bother to not call re-entrantly...
what neat stuff could we do with a re-entrant ghc? Could it
eventually lead to an internally parallel ghc or are there deeper
reasons it's hard to parallelize compilation? That would be really
cool, i
The performance problem was due to the use of unsafePerformIO or other
thunk-locking functions. The problem was that such functions can
cause severe performance problems when using a deep stack. The
problem is that these functions need to traverse the stack to
atomically claim thunks that might b
On 27 August 2011 09:00, Evan Laforge wrote:
> Right, that's probably the one I mentioned. And I think he was trying
> to parallelize ghc internally, so even compiling one file could
> parallelize. That would be cool and all, but seems like a lot of work
> compared to just parallelizing at the f
> From what I remember someone tried to parallelize GHC but it turned
> out to be tricky in practice. At the moment we're trying to parallelize
> Cabal which would allow us to build packages/modules in parallel using
> ghc -c and let Cabal handle dependency management (including
> p
rned
out to be tricky in practice. At the moment we're trying to parallelize
Cabal which would allow us to build packages/modules in parallel using
ghc -c and let Cabal handle dependency management (including
preprocessing of .hsc files).
Johan
GHC can saturate them all. Can validate GHC in well under 10
>> minutes on it.
>
> To wander a bit from the topic, when I first saw this I thought "wow,
> ghc builds in parallel now, I want that" but then I realized it's
> because ghc itself uses make, not --make. --m
On Tue, Dec 7, 2010 at 12:00 PM, Bulat Ziganshin
wrote:
> Hello John,
>
> Tuesday, December 7, 2010, 11:54:22 AM, you wrote:
>
>> The bottleneck for building on my multi-core machine is ld, which
>
> afaik, there was some alternative linker, at least for linux systems
gold, developed by Google.
Hello John,
Tuesday, December 7, 2010, 11:54:22 AM, you wrote:
> The bottleneck for building on my multi-core machine is ld, which
afaik, there was some alternative linker, at least for linux systems
--
Best regards,
Bulat    mailto:bulat.zigans...@gmail.com
On 7 December 2010 08:54, John Smith wrote:
> Gold is an incremental and multi-threaded linker, but can only output ELF
> (not Windows). Is there a cross-platform solution suitable for GHC?
Not AFAIK. One thing that would probably help a lot is if
GHC-generated code stopped causing the linker to
parallel (or am I being naive?)
Gold is an incremental and multi-threaded linker, but can only output ELF (not Windows). Is there a cross-platform
solution suitable for GHC?
* Ian Lynagh:
> On Thu, Sep 30, 2010 at 08:13:01PM +0200, Florian Weimer wrote:
>> <http://hackage.haskell.org/trac/ghc/wiki/Building/Hacking> says that
>> parallel builds are supported, but this doesn't seem to be true
>> anymore.
>
> It's still tru
On Thu, Sep 30, 2010 at 08:13:01PM +0200, Florian Weimer wrote:
> <http://hackage.haskell.org/trac/ghc/wiki/Building/Hacking> says that
> parallel builds are supported, but this doesn't seem to be true
> anymore.
It's still true. If you're having problems, plea
<http://hackage.haskell.org/trac/ghc/wiki/Building/Hacking> says that
parallel builds are supported, but this doesn't seem to be true
anymore. This is a bit unfortunate because in theory, builds should
parallelize quite well.
ting quite a nice talk given by Simon PJ recently in Boston
http://pls.posterous.com/simon-peyton-jones-on-data-parallel-haskell
Related to some of the questions asked at the talk, I would be curious to hear
any comments regarding adding support for processor level SIMD vectorization
(e.g., the SSE{1,2,3} i
> From: Roman Leshchinskiy
Following on this discussion, I have an algorithm that currently uses
BLAS to do the heavy work. I'd like to try to get it working with DPH
or Repa, although my prior attempts have been less than successful.
I have a vector of vectors where each element depends upon
On 04/05/2010, at 18:37, Christian Höner zu Siederdissen wrote:
> * Roman Leshchinskiy [04.05.2010 10:02]:
>> On 04/05/2010, at 11:10, Christian Höner zu Siederdissen wrote:
>>
>>> Here http://www.tbi.univie.ac.at/newpapers/Abstracts/98-06-009.ps.gz is
>>> a
. Would something like
> this be useful?
This would be very useful in general, as a number of algorithms that now
require lazy arrays or ST/IO could be written with pure code. With the
correct index transformation, it should be possible to have everything
laid out nicely.
>
> > Here h
be safe and pure. Would something like this
be useful?
> Here http://www.tbi.univie.ac.at/newpapers/Abstracts/98-06-009.ps.gz is
> a description of a parallel version of RNAfold.
IIUC, this parallelises processing of each diagonal but computes the diagonals
one after another. Could you perhaps store each di
* Roman Leshchinskiy [04.05.2010 02:32]:
> On 04/05/2010, at 09:21, Christian Höner zu Siederdissen wrote:
>
> > Hi,
> >
> > on that topic, consider this (rather trivial) array:
> >
> > a = array (1,10) [ (i,f i) | i <-[1..10]] where
> > f 1 = 1
> > f 2 = 1
> > f i = a!(i-1) + a!(i-2)
> >
>
description of a parallel version of RNAfold.
>
> Repa arrays don't support visible destructive update. For many algorithms you
> shouldn't need it, and it causes problems for parallelisation.
>
> I'm actively writing more Repa examples now. Can you sent me s
On 04/05/2010, at 09:21, Christian Höner zu Siederdissen wrote:
> Hi,
>
> on that topic, consider this (rather trivial) array:
>
> a = array (1,10) [ (i,f i) | i <-[1..10]] where
> f 1 = 1
> f 2 = 1
> f i = a!(i-1) + a!(i-2)
>
> (aah, school ;)
>
> Right now, I am abusing vector in ST by do
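The array in the message above is already valid Haskell: immutable `Data.Array` arrays are lazy in their elements, so an array may be defined in terms of itself and each element demands the earlier ones on first use, with no explicit mutation. A self-contained version:

```haskell
import Data.Array

-- Self-referential lazy array: element i is forced on demand and
-- pulls in elements i-1 and i-2, exactly as in the thread's example.
a :: Array Int Integer
a = array (1, 10) [ (i, f i) | i <- [1 .. 10] ]
  where
    f 1 = 1
    f 2 = 1
    f i = a ! (i - 1) + a ! (i - 2)

main :: IO ()
main = print (a ! 10)  -- prints 55
```

Note, though, that each element here depends serially on the previous two, so (as pointed out later in the thread) there is nothing to parallelise across elements in this particular recurrence.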
On 03/05/2010, at 10:04 PM, Johan Tibell wrote:
> On Mon, May 3, 2010 at 11:12 AM, Simon Peyton-Jones
> wrote:
> | Does this mean DPH is ready for abuse?
> |
> | The wiki page sounds pretty tentative, but it looks like it's been awhile
> | since it's been updated.
> |
> | http://www.haskell.org
You can certainly create an array with these values, but in the provided code
it looks like each successive array element has a serial dependency on the
previous two elements. How were you expecting it to parallelise?
Repa arrays don't support visible destructive update. For many algorithms you
choener:
>
> To summarise: I need arrays that allow in-place updates.
Many of the array libraries provide both mutable and immutable
interfaces, typically in ST or IO, including vector.
Sorry, to make it more clear:
in the line:
> write a (a'!(i-1) + a!(i-2))
only
> (a'!(i-1) + a!(i-2))
would need to be parallel, as there we typically have a sum/minimum or
whatever. The forM_ over each index does not need to be, since we have
to fill the array anyway...
* Christian
* Duncan Coutts [30.04.2010 17:11]:
> On Fri, 2010-04-30 at 10:25 -0400, Tyson Whitehead wrote:
> > On April 30, 2010 06:32:55 Duncan Coutts wrote:
> > > In the last few years GHC has gained impressive support for parallel
> > > programming on commodity multi-core
On 03/05/2010, at 22:04, Johan Tibell wrote:
> On Mon, May 3, 2010 at 11:12 AM, Simon Peyton-Jones
> wrote:
> | Does this mean DPH is ready for abuse?
> |
> | The wiki page sounds pretty tentative, but it looks like it's been awhile
> | since it's been updated.
> |
> | http://www.haskell.org/has
On Mon, May 3, 2010 at 11:12 AM, Simon Peyton-Jones
wrote:
> | Does this mean DPH is ready for abuse?
> |
> | The wiki page sounds pretty tentative, but it looks like it's been awhile
> | since it's been updated.
> |
> | http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell
>
> In truth, ne
o be ready
for abuse :-). We have not lost enthusiasm though -- Manuel, Roman, Gabi,
Ben, and I talk on the phone each week about it. I think we'll have something
usable by the end of the summer.
Meanwhile, as Duncan mentioned, the regular, shape-polymorphic data-parallel
Repa library (d
On Fri, 2010-04-30 at 10:25 -0400, Tyson Whitehead wrote:
> On April 30, 2010 06:32:55 Duncan Coutts wrote:
> > In the last few years GHC has gained impressive support for parallel
> > programming on commodity multi-core systems. In addition to traditional
> > threads and
On April 30, 2010 06:32:55 Duncan Coutts wrote:
> In the last few years GHC has gained impressive support for parallel
> programming on commodity multi-core systems. In addition to traditional
> threads and shared variables, it supports pure parallelism, software
> transactional memor
GHC HQ and Well-Typed are very pleased to announce a 2-year project
funded by Microsoft Research to push the real-world adoption and
practical development of parallel Haskell with GHC.
We are seeking organisations to take part: read on for details.
In the last few years GHC has gained
the whole point: the optimum is a
function of the machine. If you hardcode the granularity then your code isn't
future proof and isn't portable.
While that's true, in practice it's often not a problem.
Sometimes you can pick a granularity that is small enough to give you
t
ation-specific, using them effectively
may require knowing something about the implementation. For instance,
in GHC 6.12 we reduced some overheads and made it possible to
parallelise some fine-grained parallel problems that previously resulted
in slowdown on a multicore.
(I'll respond to the
On Mon, Dec 28, 2009 at 2:01 PM, Jon Harrop wrote:
> On Monday 28 December 2009 12:56:17 Yitzchak Gale wrote:
>> This discussion definitely does not belong on the
>> Haskell-Beginners list. Besides not being a topic
>> for beginners, being there is keeping it under the
>> radar of many or most of
On Monday 28 December 2009 12:56:17 Yitzchak Gale wrote:
> This discussion definitely does not belong on the
> Haskell-Beginners list. Besides not being a topic
> for beginners, being there is keeping it under the
> radar of many or most of the people who work on
> these things in Haskell.
>
> I am
timum is a
> function of the machine. If you hardcode the granularity then your code isn't
> future proof and isn't portable.
>
> From chapter 24 of Real World Haskell on sorting:
>
> "At this fine granularity, the cost of using par outweighs any possible
> usefulnes
I've just uploaded parallel-2.0.0.0 to Hackage. If you're using
Strategies at all, I'd advise updating to this version of the parallel
package. It's not completely API compatible, but if you're just using
the supplied Strategies such as parList, the changes should be
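The Strategies API lives in the parallel package and its exact names have shifted across versions. A dependency-free sketch of what `parList` arranges, written against the `par`/`pseq` primitives that base re-exports from GHC.Conc, might look like this (toy code, not the package's implementation):

```haskell
import GHC.Conc (par, pseq)

-- Spark every element of the list, then walk the spine so all sparks
-- exist before the caller consumes the result. Semantically this is
-- the identity function; the sparks only add parallelism hints.
parListSketch :: [a] -> [a]
parListSketch []       = []
parListSketch (x : xs) =
  let rest = parListSketch xs
  in x `par` rest `pseq` (x : rest)

main :: IO ()
main = print (sum (parListSketch (map expensive [1 .. 8])))
  where
    expensive n = sum [1 .. n * 10000 :: Int]
```

Compiled with `-threaded` and run with `+RTS -N`, idle capabilities pick up the sparks; without that, the sparks are simply ignored and the program runs sequentially with the same result.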
On 07/05/2009 18:12, Brandon S. Allbery KF8NH wrote:
On May 7, 2009, at 06:27 , Neil Mitchell wrote:
If however I run it with runhaskell Test.hs +RTS -N2 I get told the
-N2 flag isn't supported. Is there a way to runhaskell a program on
As a workaround you could use 'ghc -e main foo.hs +RTS -N
On 07/05/2009 23:58, Duncan Coutts wrote:
Note that for the next ghc release the process library will use a
different implementation of waitForProcess (at least on Unix) so will
not need multiple OS threads to wait for multiple processes
simultaneously.
It will if I ever get it finished...
Ch
On Thu, 2009-05-07 at 15:12 +0100, Neil Mitchell wrote:
> >> This is a test framework that spawns system commands. My guess is the
> >> Haskell accounts for a few milliseconds of execution per hour. Running
> >> two system commands in parallel gives a massive boost.
On May 7, 2009, at 06:27 , Neil Mitchell wrote:
If however I run it with runhaskell Test.hs +RTS -N2 I get told the
-N2 flag isn't supported. Is there a way to runhaskell a program on
As a workaround you could use 'ghc -e main foo.hs +RTS -N2'.
That works great :-) Perhaps this trick should b
Hi
>> Isn't ghc -e using the byte-code interpreter?
>
> Yes; apparently it "works", though we still haven't stress-tested it running
> real parallel programs using GHCi with +RTS -N2.
It seemed perfectly stable when I tried, on a few examples I had
knockin
Hi Bulat,
> Neil, you can implement it by yourself - convert -j3 in cmdline to
> +RTS -N3 -RTS and run program itself. alternatively, you can use
> defaultsHook() although i'm not sure that it can change number of
> Capabilities
Can I run a program itself? getProgName doesn't give me enough to
in
Hello Simon,
Thursday, May 7, 2009, 5:24:54 PM, you wrote:
>> A related question I wanted to ask. Is there any way to have my
>> Haskell program support -j3, which is equivalent to +RTS -N3 -RTS. At
>> the moment I've set this up with a shell script to translate the -j3,
>> but a nicer method wou
Hello Simon,
Thursday, May 7, 2009, 5:27:04 PM, you wrote:
>> my own program creates a lot of parallel threads without using -N
>>
>> the secret is using of forkOS plus C code in threads. since you
>> spend time in system calls this should also work for you
> Are yo
On 07/05/2009 11:37, Bulat Ziganshin wrote:
Hello Neil,
Thursday, May 7, 2009, 2:27:34 PM, you wrote:
This is a test framework that spawns system commands. My guess is the
Haskell accounts for a few milliseconds of execution per hour. Running
two system commands in parallel gives a massive
er ghc -e, so couldn't it share
the same mechanism?
What's interesting to me is whether the byte-code interpreter will work right
with +RTS -N2
Isn't ghc -e using the byte-code interpreter?
Yes; apparently it "works", though we still haven't stress-tested it
Hello Neil,
Thursday, May 7, 2009, 2:27:34 PM, you wrote:
> This is a test framework that spawns system commands. My guess is the
> Haskell accounts for a few milliseconds of execution per hour. Running
> two system commands in parallel gives a massive boost.
> A related question
you want performance you should start by
> compiling your program.
This is a test framework that spawns system commands. My guess is the
Haskell accounts for a few milliseconds of execution per hour. Running
two system commands in parallel gives a massive boost.
A related question I wanted to ask. I
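The pattern described above, where the Haskell side mostly waits on external commands, can be sketched like this. The commands are placeholders; with the threaded RTS (and `+RTS -N2`) the two external processes genuinely run at the same time:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import System.Process (callCommand)

-- Spawn two system commands from separate Haskell threads and block
-- until both have exited.
runBoth :: String -> String -> IO ()
runBoth c1 c2 = do
  d1 <- newEmptyMVar
  d2 <- newEmptyMVar
  _ <- forkIO (callCommand c1 >> putMVar d1 ())
  _ <- forkIO (callCommand c2 >> putMVar d2 ())
  takeMVar d1
  takeMVar d2

main :: IO ()
main = do
  runBoth "exit 0" "exit 0"   -- placeholder commands
  putStrLn "both commands finished"
```

This is also why the later messages about `waitForProcess` matter: on older GHCs the non-threaded runtime could not wait on several child processes at once, so `-threaded` was effectively required for this pattern.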
On 06/05/2009 17:19, Neil Mitchell wrote:
I've got a program which I'd like to run on multiple threads. If I
compile it with ghc --make -threaded, then run with +RTS -N2 it runs
on 2 cores very nicely.
If however I run it with runhaskell Test.hs +RTS -N2 I get told the
-N2 flag isn't supported.
Hi,
I've got a program which I'd like to run on multiple threads. If I
compile it with ghc --make -threaded, then run with +RTS -N2 it runs
on 2 cores very nicely.
If however I run it with runhaskell Test.hs +RTS -N2 I get told the
-N2 flag isn't supported. Is there a way to runhaskell a program
My first post was comparing almost identical machines: Different Q6600
steppings (the earlier chip makes a good space heater!) on different
motherboards, same memory, both stock speeds.
In a few weeks when the semester ends, I'll be able to try Linux -vs-
BSD -vs- OS X on identical hardware
On April 21, 2009 04:39:40 Simon Marlow wrote:
> > These ratios match up like physical constants, or at least invariants of
> > my Haskell implementation. However, the user time is constant on OS X, so
> > these ratios reflect the actual parallel speedup on OS X. The user time
>
2009/4/21 Don Stewart :
> Little advice and tidbits are creeping out of Simon's head.
>
> Is it time for a parallel performance wiki, where every question that
> becomes an FAQ gets documented live?
>
> http://haskell.org/haskellwiki/Performance/Parallel
>
> Maybe
183.7 149.3
> > 466.4  479.0  505.2  528.1
> >  1.00   1.91   2.75   3.54
> >
> > OS X
> > 2.4 GHz Q6600
> >     1      2      3      4
> > 676.9  359.4  246.7  191.4
> > 673.4  673.7  675.9  674.8
> >  0.99   1.87   2.74   3.53
>
>> > Yes, what's happening is this: GHC 6.10.2 contains some slightly bogus
>> > heuristics about when to turn on the parallel GC, and it just so
>> > happens that 8 processors tips it over the point where the parallel GC
>> > is enabled for young-genera
lowdown is unlikely to be related to these bits).
> >
> > Yes, what's happening is this: GHC 6.10.2 contains some slightly bogus
> > heuristics about when to turn on the parallel GC, and it just so
> > happens that 8 processors tips it over the point where the parallel GC
>
e bits).
>
> Yes, what's happening is this: GHC 6.10.2 contains some slightly bogus
> heuristics about when to turn on the parallel GC, and it just so
> happens that 8 processors tips it over the point where the parallel GC
> is enabled for young-generation collections. In 6.10.2