Re: ANNOUNCE: GHC 7.8.1 Release Candidate 1
I had some similar problems and had to fiddle with my DYLD_LIBRARY_PATH so that GHC-related executables would see the libffi.dylib that comes with GHC before any of my system-wide installed copies of libffi.dylib. Why the permissive @rpath link for libffi.dylib if the GHC executables are supposed to come with their own?

On Tue, Feb 4, 2014 at 8:15 AM, Barney Stratford barney_stratf...@fastmail.fm wrote: I've been attempting to build under Mac OS X Mavericks and have run into some problems. My iconv and gmp are installed in non-standard locations using Fink. When configuring http://www.haskell.org/ghc/dist/7.8.1-rc1/ghc-7.8.20140130-x86_64-apple-darwin-mavericks.tar.bz2 I get:

barneys-imac:ghc-7.8.20140130 bjs$ ./configure --prefix=/Users/bjs/Desktop/ghc --with-iconv-includes=/sw/include --with-gmp-includes=/sw/include --with-iconv-libraries=/sw/lib --with-gmp-libraries=/sw/lib
configure: WARNING: unrecognized options: --with-iconv-includes, --with-iconv-libraries

and then the installed executable can't itself build GHC from source because of the missing iconv libraries. I noticed that ./configure --help doesn't mention --with-iconv-* in the Mavericks install files but it does in the source build. Any ideas? Cheers, Barney.

___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Call for talks: Haskell Implementors Workshop 2013, Sept 22, Boston
Please pass on this announcement! The deadline is in two weeks.

Call for Talks
ACM SIGPLAN Haskell Implementors' Workshop
http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2013
Boston, USA, September 22nd, 2013

The workshop will be held in conjunction with ICFP 2013: http://www.icfpconference.org/icfp2013/

Important dates

Proposal Deadline: 13th August 2013 (by midnight, any timezone)
Notification: 27th August 2013
Workshop: 22nd September 2013

The Haskell Implementors' Workshop is to be held alongside ICFP 2013 this year in Boston. There will be no proceedings; it is an informal gathering of people involved in the design and development of Haskell implementations, tools, libraries, and supporting infrastructure. This relatively new workshop reflects the growth of the user community: there is a clear need for a well-supported tool chain for the development, distribution, deployment, and configuration of Haskell software. The aim is for this workshop to give the people involved with building the infrastructure behind this ecosystem an opportunity to bat around ideas, share experiences, and ask for feedback from fellow experts. We intend the workshop to have an informal and interactive feel, with a flexible timetable and plenty of room for ad-hoc discussion, demos, and impromptu short talks.

Scope and target audience

It is important to distinguish the Haskell Implementors' Workshop from the Haskell Symposium, which is also co-located with ICFP 2013. The Haskell Symposium is for the publication of Haskell-related research. In contrast, the Haskell Implementors' Workshop will have no proceedings -- although we will aim to make talk videos, slides and presented data available with the consent of the speakers. In the Haskell Implementors' Workshop, we hope to study the underlying technology. We want to bring together anyone interested in the nitty-gritty details behind turning plain-text source code into a deployed product.
Having said that, members of the wider Haskell community are more than welcome to attend the workshop -- we need your feedback to keep the Haskell ecosystem thriving. The scope covers any of the following topics. There may be some topics that people feel we've missed, so by all means submit a proposal even if it doesn't fit exactly into one of these buckets:

* Compilation techniques
* Language features and extensions
* Type system implementation
* Concurrency and parallelism: language design and implementation
* Performance, optimisation and benchmarking
* Virtual machines and run-time systems
* Libraries and tools for development or deployment

Talks

At this stage we would like to invite proposals from potential speakers for a relatively short talk. We are aiming for 20-minute talks with 10 minutes for questions and changeovers. We want to hear from people writing compilers, tools, or libraries, people with cool ideas for directions in which we should take the platform, proposals for new features to be implemented, and half-baked crazy ideas. Please submit a talk title and abstract of no more than 200 words.

Submissions should be made via EasyChair. The website is: https://www.easychair.org/conferences/?conf=hiw2013 If you don't have an account you can create one here: https://www.easychair.org/account/signup.cgi Because the submission is an abstract only, please click the "abstract only" button when you make your submission. There is no need to attach a separate file.

We will also have a lightning talks session which will be organised on the day. These talks will be 2-10 minutes, depending on available time. Suggested topics for lightning talks are to present a single idea, a work-in-progress project, a problem to intrigue and perplex Haskell implementors, or simply to ask for feedback and collaborators.
Organisers

* Ryan Newton (Indiana University)
* Neal Glew (Intel Labs)
* Edward Yang (Stanford University)
* Thomas Schilling (University of Kent)
* Geoffrey Mainland (Drexel University)
Re: PSA: GHC can now be built with Clang
By the way, on the topic of these performance comparisons -- has anyone tried building the RTS with the Intel C compiler? (They try very hard at being a drop-in GHC replacement.)
Re: PSA: GHC can now be built with Clang
Err, GCC replacement. But, ironically, GHC [backend] replacement as well, as of the recent ICFP paper. On Mon, Jul 1, 2013 at 12:03 PM, Ryan Newton rrnew...@gmail.com wrote: By the way, on the topic of these performance comparisons -- has anyone tried building the RTS with the Intel C compiler? (They try very hard at being a drop-in GHC replacement.)
Who uses Travis CI and can help write a cookbook for those guys?
The Travis folks have decided they want to support Haskell better (multiple compiler versions): https://github.com/travis-ci/travis-ci/issues/882#issuecomment-20165378 (Yay!) They're asking for someone to help them with setup scripts. They mention their cookbook collection here: https://github.com/travis-ci/travis-cookbooks In that thread above, I pasted our little script that fetches and installs multiple GHC versions, but I have little experience with cloud technologies and VMs. Can someone jump in and help push this forward? As a community, I'm sure it would be great to get a higher percentage of Hackage packages using simple, hosted continuous testing... I'd personally like to replace my Jenkins install if they can get the necessary GHC versions in there. Best, -Ryan
Re: GHC Performance Tsar
I'm particularly interested in parallel performance in the 8-core space. (In fact, we saw some regressions from 7.2 to 7.4 that we never tracked down properly, but maybe can now.) If the buildbot can make it easy to add a new slave machine that runs and uploads its results to a central location, then I would be happy to donate a few hours of dedicated time (no other logins) on a 32-core Westmere machine, and hopefully other architectures soon. Maybe this use case is well-covered by creating a Jenkins/Travis slave and letting it move the data around? (CodeSpeed looks pretty nice too.) Cheers, -Ryan

On Wed, Dec 5, 2012 at 12:40 AM, Ben Lippmeier b...@ouroborus.net wrote: On 01/12/2012, at 1:42 AM, Simon Peyton-Jones wrote: | While writing a new nofib benchmark today I found myself wondering | whether all the nofib benchmarks are run just before each release, I think we could do with a GHC Performance Tsar. Especially now that Simon has changed jobs, we need to try even harder to broaden the base of people who help with GHC. It would be amazing to have someone who was willing to:

* Run nofib benchmarks regularly, and publish the results
* Keep baseline figures for GHC 7.6, 7.4, etc. so we can keep track of regressions
* Investigate regressions to see where they come from; ideally propose fixes
* Extend nofib to contain more representative programs (as Johan is currently doing)

That would help keep us on the straight and narrow. I was running a performance regression buildbot for a while a year ago, but gave it up because I didn't have time to chase down the breakages. At the time we were primarily worried about the asymptotic performance of DPH, and fretting about a few percent absolute performance was too much of a distraction. However: if someone wants to pick this up then they may get some use out of the code I wrote for it. The dph-buildbot package in the DPH repository should still compile.
This package uses http://hackage.haskell.org/package/buildbox-1.5.3.1 which includes code for running tests, collecting the timings, comparing against a baseline, making pretty reports etc. There is then a second package buildbox-tools which has a command line tool for listing the benchmarks that have deviated from the baseline by a particular amount. Here is an example of a report that dph-buildbot made: http://log.ouroborus.net/limitingfactor/dph/nightly-20110809_000147.txt Ben.
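The baseline comparison step that buildbox-tools automates boils down to flagging benchmarks whose timings deviate from stored baseline figures by more than a threshold. A minimal sketch of that check in plain Haskell (the function names and sample figures here are illustrative, not buildbox's actual API):

```haskell
-- Percent deviation of a current timing from a baseline timing.
deviation :: Double -> Double -> Double
deviation baseline current = (current - baseline) / baseline * 100

-- Keep only the benchmarks that moved more than the threshold,
-- in either direction.
regressions :: Double -> [(String, Double, Double)] -> [(String, Double)]
regressions threshold results =
  [ (name, d)
  | (name, base, cur) <- results
  , let d = deviation base cur
  , abs d > threshold ]

main :: IO ()
main = print (regressions 5 [("fib", 1.0, 1.02), ("queens", 2.0, 2.5)])
-- queens moved by +25%, fib by only +2%, so only queens is reported
```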
Re: memory fragmentation with ghc-7.6.1
Hi Ben, I would bet on the same memory issues Simon mentioned. But... while you're at it, would you mind trying a little experiment to share your work items through a lock-free queue rather than a TQueue? http://hackage.haskell.org/package/lockfree-queue Under some situations this can yield some benefit. But again, you didn't see workers retrying transactions, so this is probably not an issue. -Ryan

On Mon, Oct 1, 2012 at 4:18 AM, Simon Marlow marlo...@gmail.com wrote: Hi Ben, My guess would be that you're running into some kind of memory bottleneck. Three common ones are:

(1) total memory bandwidth
(2) cache ping-ponging
(3) NUMA overheads

You would run into (1) if you were using an allocation area size (-A or -H) larger than the L2 cache. Your stats seem to indicate that you're running with a large heap - could that be the case? (2) happens if you share data a lot between cores. It can also happen if the RTS shares data between cores, but I've tried to squash as much of that as I can. (3) is sadly something that happens on these large AMD machines (and to some extent large multicore Intel boxes too). Improving our NUMA support is something we really need to do. NUMA overheads tend to manifest as very unpredictable runtimes. I suggest using perf to gather some low-level stats about cache misses and suchlike. http://hackage.haskell.org/trac/ghc/wiki/Debugging/LowLevelProfiling/Perf Cheers, Simon

On 29/09/2012 07:47, Ben Gamari wrote: Simon Marlow marlo...@gmail.com writes: On 28/09/12 17:36, Ben Gamari wrote: Unfortunately, after poking around I found a few obvious problems with both the code and my testing configuration which explained the performance drop. Things seem to be back to normal now. Sorry for the noise! Great job on the new codegen. That's good to hear, thanks for letting me know! Of course!
That being said, I have run in to a bit of a performance issue which could be related to the runtime system. In particular, as I scale up in thread count (from 6 up to 48, the core count of the machine) in my program[1] (test data available), I'm seeing the total runtime increase, as well as a corresponding increase in CPU-seconds used. This despite the RTS claiming consistently high (~94%) productivity. Meanwhile Threadscope shows that nearly all of my threads are working busily with very few STM retries and no idle time. This in an application which should scale reasonably well (or so I believe). Attached below you will find a crude listing of various runtime statistics over a variety of thread counts (ranging into what should probably be regarded as the absurdly large). The application is a parallel Gibbs sampler for learning probabilistic graphical models. It involves a set of worker threads (updateWorkers) pulling work units off of a common TQueue. After grabbing a work unit, the thread will read a reference to the current global state from an IORef. It will then begin a long-running calculation, resulting in a small value (forced to normal form with deepseq) which it then communicates back to a global update thread (diffWorker) in the form of a lambda through another TQueue. The global update thread then maps the global state (the same as was read from the IORef earlier) through this lambda with atomicModifyIORef'. This is all implemented in [2]. I do understand that I'm asking a lot of the language and I have been quite impressed by how well Haskell and GHC have stood up to the challenge thus far. That being said, the behavior I'm seeing seems a bit strange.
If synchronization overhead were the culprit, I'd expect to observe STM retries or thread blocking, which I do not see (by eye it seems that STM retries occur on the order of 5/second and worker threads otherwise appear to run uninterrupted except for GC; GHC event log from a 16 thread run available here[3]). If GC were the problem, I would expect this to be manifested in the productivity, which it is clearly not. Do you have any idea what else might be causing such extreme performance degradation with higher thread counts? I would appreciate any input you would have to offer. Thanks for all of your work! Cheers, - Ben

[1] https://github.com/bgamari/bayes-stack/v2
[2] https://github.com/bgamari/bayes-stack/blob/v2/BayesStack/Core/Gibbs.hs
[3] http://goldnerlab.physics.umass.edu/~bgamari/RunCI.eventlog

Performance of Citation Influence model on lda-handcraft data set
1115 arcs, 702 nodes, 50 items per node average
100 sweeps in blocks of 10, 200 topics
Running with +RTS -A1G
ghc-7.7 9c15249e082642f9c4c0113133afd78f07f1ade2
Cores User time (s)
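For concreteness, here is a minimal sketch of the worker/diffWorker structure Ben describes, using Chan from base in place of the TQueues (a lock-free queue, as suggested above, could slot in the same way) and with squaring standing in for the long-running calculation; all names and numbers are illustrative:

```haskell
import Control.Concurrent
import Control.Monad
import Data.IORef

main :: IO ()
main = do
  workQ  <- newChan               -- work units for the workers
  diffQ  <- newChan               -- update lambdas sent back to diffWorker
  global <- newIORef (0 :: Int)   -- the shared global state
  mapM_ (writeChan workQ) [1 .. 100 :: Int]
  -- Four workers, 25 units each: grab a unit, do the "long" computation,
  -- and ship a small, fully forced delta back as an update function.
  forM_ [1 .. 4 :: Int] $ \_ -> forkIO $ replicateM_ 25 $ do
    n <- readChan workQ
    let delta = n * n
    delta `seq` writeChan diffQ (+ delta)
  -- diffWorker: fold every update into the global state.
  replicateM_ 100 $ do
    f <- readChan diffQ
    atomicModifyIORef' global (\s -> (f s, ()))
  print =<< readIORef global      -- sum of squares 1..100, i.e. 338350
```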
Re: containing memory-consuming computations
Simon mentioned a system of doing multiple GCs to measure actual live data. But wouldn't a more limited alternative be capping *allocation* rather than live data? GHC already has a mechanism to preempt IO threads based on an allocation trip wire. In fact, that's *the* preemption mechanism. Isn't the only piece missing to have a primitive similar to Chez Scheme's make-engine: http://www.scheme.com/csug8/control.html#./control:s13 ... which would transfer control to a child computation, but would return control to the parent (along with a continuation) when its allocation budget is exhausted? Make-engine + Safe Haskell + timeouts should be everything one needs to resist an adversarial untrusted program. Maybe? -Ryan

P.S. Chez Scheme engines are actually metered by the number of procedure calls, not allocation, as far as I know.

On Fri, Apr 20, 2012 at 7:35 PM, Edward Z. Yang ezy...@mit.edu wrote: Excerpts from Brandon Allbery's message of Fri Apr 20 19:31:54 -0400 2012: So, it would be pretty interesting if we could have an ST s style mechanism, where the data structure is not allowed to escape. But I wonder if this would be too cumbersome for anyone to use. Isn't this what monadic regions are for? That's right! But we have a hard enough time convincing people it's worth it, just for file handles. Edward
Re: parallel garbage collection performance
However, the parallel GC will be a problem if one or more of your cores is being used by other process(es) on the machine. In that case, the GC synchronisation will stall and performance will go down the drain. You can often see this on a ThreadScope profile as a big delay during GC while the other cores wait for the delayed core. Make sure your machine is quiet and/or use one fewer cores than the total available. It's not usually a good idea to use hyperthreaded cores either. Does it ever help to set the number of GC threads greater than numCapabilities, to over-partition the GC work? The idea would be to enable some load balancing in the face of perturbation from external load on the machine... It looks like GHC 6.10 had a -g flag for this that later went away? -Ryan
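To follow Simon's advice programmatically, you can compare the RTS's capability count against the machine's processor count and then launch with one fewer capability (+RTS -N<procs-1>). This sketch assumes GHC 7.4 or later, where getNumProcessors and getNumCapabilities are available in GHC.Conc:

```haskell
import GHC.Conc (getNumCapabilities, getNumProcessors)

main :: IO ()
main = do
  procs <- getNumProcessors    -- processors the OS reports
  caps  <- getNumCapabilities  -- what -N gave us
  putStrLn ("capabilities: " ++ show caps ++ " of " ++ show procs ++ " processors")
```

Running with one fewer capability than cores leaves headroom for the OS and other processes, which is exactly what keeps the parallel GC's sync points from stalling on a busy core.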
Re: Potential GSoC proposal: Reduce the speed gap between 'ghc -c' and 'ghc --make'
the.dead.shall.r...@gmail.com wrote: Thanks. I'll look into how to optimise .hi loading by more traditional means, then. Lennart is working on speeding up the binary package (which I believe is used to decode the .hi files). His work might benefit this effort. Last time I tested it, mmap still offered better performance than fread on Linux. In addition to improving the deserialization code, it would seem like a good idea to mmap the whole file at the outset as well. It seems like readBinMem is the relevant function (readIFace -> readBinIFace -> readBinMem), which occurs here: https://github.com/ghc/ghc/blob/08894f96407635781a233145435a78f144accab0/compiler/utils/Binary.hs#L222 Currently it does one big hGetBuf to read the file. Since the interface files aren't changing dynamically, I think it's safe to just replace this code with an mmap. It's nice to see that we have several wrapped versions of mmap provided on Hackage:

http://hackage.haskell.org/package/vector-mmap
http://hackage.haskell.org/package/bytestring-mmap-0.2.2
http://hackage.haskell.org/package/mmap-0.5.7

Cheers, -Ryan
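For reference, the read pattern being discussed looks roughly like this simplified sketch (readWholeFile is an illustrative name, not GHC's actual code); the proposal is to replace the mallocBytes + hGetBuf pair with a mapping call such as the mmap package's mmapFileByteString:

```haskell
import Data.Word (Word8)
import Foreign.Marshal.Alloc (mallocBytes)
import Foreign.Ptr (Ptr)
import System.IO

-- One big read into a malloc'd buffer, in the style of readBinMem today.
readWholeFile :: FilePath -> IO (Ptr Word8, Int)
readWholeFile path = withBinaryFile path ReadMode $ \h -> do
  size <- fromIntegral <$> hFileSize h
  buf  <- mallocBytes size
  n    <- hGetBuf h buf size   -- the single large read being discussed
  return (buf, n)
```

Since .hi files are effectively immutable during a build, an mmap'd view could skip this copy entirely and let the OS page the data in on demand.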
Re: Potential GSoC proposal: Reduce the speed gap between 'ghc -c' and 'ghc --make'
Mikhail's original question was about loading interface files for entire packages with mmap. As a wild thought experiment, if GHC had a saved-heaps capability, I believe that would avoid the Unique issues with mmap'ing individual data structures that Simon mentioned. How about if each whole-package interface were then a GHC saved heap that, when booted, would become an interface server that would communicate with, and be shared by, other GHC build server processes? -Ryan

On Fri, Apr 27, 2012 at 4:57 AM, Simon Marlow marlo...@gmail.com wrote: On 26/04/2012 23:32, Johan Tibell wrote: On Thu, Apr 26, 2012 at 2:34 PM, Mikhail Glushenkov the.dead.shall.r...@gmail.com wrote: Thanks. I'll look into how to optimise .hi loading by more traditional means, then. Lennart is working on speeding up the binary package (which I believe is used to decode the .hi files). His work might benefit this effort. We're still using our own Binary library in GHC. There's no good reason for that, unless using the binary package would be a performance regression. (We don't know whether that's the case or not, with the current binary.) Cheers, Simon
Re: Potential GSoC proposal: Reduce the speed gap between 'ghc -c' and 'ghc --make'
The idea that I currently like the most is to make it possible to save and load objects in the GHC heap format. That way, deserialisation could be done with a simple fread() and a fast pointer-fixup pass, which would hopefully make running many 'ghc -c' processes as fast as a single 'ghc --make'. This trick is commonly employed in the games industry to speed up load times [1]. Given that Haskell is a garbage-collected language, the implementation will be trickier than in C++ and will have to be done at the RTS level. Is this a good idea? How hard would it be to implement this optimisation? I believe OCaml does something like this. Interesting. What does OCaml do in this department? A bit of googling didn't turn up a link. For many years Chez Scheme had a saved-heaps capability. It was recently dropped because of the preponderance of SELinux, which randomizes addresses and messes it up, but here's the doc for V7: http://www.scheme.com/csug7/use.html#g10 I've always wondered why there weren't more language implementations with saved heaps. Under Chez the startup times were amazing (for a 50 KLOC compiler, a two-second load would become 4 milliseconds). Google Dart apparently has or will have saved heaps. It seems like an obvious choice (caching initialized heaps) for enormous websites with slow load times like GMail. Chez also has pretty fast serialization to a binary FASL (fast loading) format, but I'm not sure if those were mmap'ed into the heap on load or required some parsing. The gamasutra link that Mikhail provided seems to describe a process where the programmer knows exactly what the expected heap representation for a particular object is, and manually creates it. Sounds like walking on thin ice. Do we know of any memory-safe GC'd language implementations that can dump a single object (rather than the whole heap)? Would it invoke the GC in a special way to trace the structure and copy it into a new region (to make it contiguous)?
Cheers, -Ryan
Re: containing memory-consuming computations
Hi Herbert, It sounds like you're interested in running just one client computation at once? Hence you don't have a disambiguation problem -- if the total memory footprint crosses a threshold, you know whom to blame. At least this seems easier than needing per-computation or per-IO-thread caps. By the way, the folks who implement Second Life did an interesting job of that -- they hacked Mono to be able to execute untrusted code with resource bounds. Cheers, -Ryan

On Thu, Apr 19, 2012 at 6:45 AM, Herbert Valerio Riedel h...@gnu.org wrote: Hello GHC Devs, One issue that's been bothering me when writing Haskell programs meant to be long-running processes performing computations on external input-data in terms of an event/request-loop (think web-services, SQL-servers, or REPLs) is that it is desirable to be able to limit resource usage and to contain the effects of computations which exhaust the resource limits (i.e. without crashing and burning the whole process). For the time dimension, I'm already using functions such as System.Timeout.timeout, which I can use to make sure that even a (forced) pure computation doesn't require (significantly) more wall-clock time than I expect it to. But I'm missing a similar facility for constraining the space dimension. In some other languages such as C, I have (more or less) the ability to check for /local/ out-of-memory conditions (e.g. by checking the return value of malloc(3) for heap allocations, or by handling an OOM exception), roll back the computation, and skip to the next computation request (which hopefully requires less memory...). So, is there already any such facility provided by the GHC platform that I've missed so far? ...and if not, would such a memory-limiting facility be reconcilable with the GHC RTS architecture?
Cheers, hvr
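The wall-clock capping Herbert describes (System.Timeout.timeout around a forced pure computation) looks like this in practice; the workload and the budget below are arbitrary illustrations:

```haskell
import Control.Exception (evaluate)
import System.Timeout (timeout)

-- A forced pure computation standing in for one request's work.
expensive :: Integer
expensive = sum [1 .. 10000000]

main :: IO ()
main = do
  -- Give the computation a 10-second wall-clock budget; timeout
  -- returns Nothing if the budget expires first.
  r <- timeout (10 * 1000 * 1000) (evaluate expensive)
  case r of
    Nothing -> putStrLn "request exceeded its time budget; moving on"
    Just v  -> print v
```

The missing piece in Herbert's question is the analogous combinator for the space dimension, which is what the engine/allocation-budget discussion elsewhere in this thread is about.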
Code review for new primop's CMM code?
Perhaps a question mark is more appropriate in the title. It is a code review I am seeking, not one on offer ;-).

On Thu, Mar 29, 2012 at 12:56 AM, Ryan Newton rrnew...@gmail.com wrote: Hi all, In preparation for students working on concurrent data structures GSOC(s), I wanted to make sure they could count on CAS for array elements as well as IORefs. The following patch represents my first attempt:

https://github.com/rrnewton/ghc/commit/18ed460be111b47a759486677960093d71eef386

It passes a simple test [Appendix 2 below], but I am very unsure as to whether the GC write barrier is correct. Could someone do a code-review on the following few lines of CMM:

    if (GET_INFO(arr) == stg_MUT_ARR_PTRS_CLEAN_info) {
        SET_HDR(arr, stg_MUT_ARR_PTRS_DIRTY_info, CCCS);
        len = StgMutArrPtrs_ptrs(arr);
        // The write barrier.  We must write a byte into the mark table:
        I8[arr + SIZEOF_StgMutArrPtrs + WDS(len) + (ind >> MUT_ARR_PTRS_CARD_BITS)] = 1;
    }

Thanks, -Ryan

[Appendices snipped; they appear in full in the original message below.]
Code review for new primop's CMM code.
Hi all, In preparation for students working on concurrent data structures GSOC(s), I wanted to make sure they could count on CAS for array elements as well as IORefs. The following patch represents my first attempt:

https://github.com/rrnewton/ghc/commit/18ed460be111b47a759486677960093d71eef386

It passes a simple test [Appendix 2 below], but I am very unsure as to whether the GC write barrier is correct. Could someone do a code-review on the following few lines of CMM:

    if (GET_INFO(arr) == stg_MUT_ARR_PTRS_CLEAN_info) {
        SET_HDR(arr, stg_MUT_ARR_PTRS_DIRTY_info, CCCS);
        len = StgMutArrPtrs_ptrs(arr);
        // The write barrier.  We must write a byte into the mark table:
        I8[arr + SIZEOF_StgMutArrPtrs + WDS(len) + (ind >> MUT_ARR_PTRS_CARD_BITS)] = 1;
    }

Thanks, -Ryan

-- Appendix 1: First draft CMM definition for casArray# ---

stg_casArrayzh
/* MutableArray# s a -> Int# -> a -> a -> State# s -> (# State# s, Int#, a #) */
{
    W_ arr, p, ind, old, new, h, len;
    arr = R1; // anything else?
    ind = R2;
    old = R3;
    new = R4;

    p = arr + SIZEOF_StgMutArrPtrs + WDS(ind);
    (h) = foreign "C" cas(p, old, new) [];
    if (h != old) {
        // Failure, return what was there instead of 'old':
        RET_NP(1,h);
    } else {
        // Compare and Swap Succeeded:
        if (GET_INFO(arr) == stg_MUT_ARR_PTRS_CLEAN_info) {
            SET_HDR(arr, stg_MUT_ARR_PTRS_DIRTY_info, CCCS);
            len = StgMutArrPtrs_ptrs(arr);
            // The write barrier.  We must write a byte into the mark table:
            I8[arr + SIZEOF_StgMutArrPtrs + WDS(len) + (ind >> MUT_ARR_PTRS_CARD_BITS)] = 1;
        }
        RET_NP(0,h);
    }
}

-- Appendix 2: Simple test file; when run it should print: ---
-- Perform a CAS within a MutableArray#
--   1st try should succeed: (True,33)
--   2nd should fail: (False,44)
-- Printing array:
--  33 33 33 44 33
-- Done.

{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.IO
import GHC.IORef
import GHC.ST
import GHC.STRef
import GHC.Prim
import GHC.Base
import Data.Primitive.Array
import Control.Monad

-- | Write a value to the array at the given index:
casArrayST :: MutableArray s a -> Int -> a -> a -> ST s (Bool, a)
casArrayST (MutableArray arr#) (I# i#) old new = ST $ \s1# ->
  case casArray# arr# i# old new s1# of
    (# s2#, x#, res #) -> (# s2#, (x# ==# 0#, res) #)

{-# NOINLINE mynum #-}
mynum :: Int
mynum = 33

main = do
  putStrLn "Perform a CAS within a MutableArray#"
  arr  <- newArray 5 mynum
  res  <- stToIO $ casArrayST arr 3 mynum 44
  res2 <- stToIO $ casArrayST arr 3 mynum 44
  putStrLn $ "  1st try should succeed: " ++ show res
  putStrLn $ "  2nd should fail: " ++ show res2
  putStrLn "Printing array:"
  forM_ [0..4] $ \i -> do
    x <- readArray arr i
    putStr (" " ++ show x)
  putStrLn ""
  putStrLn "Done."
Re: Abstracting over things that can be unpacked
+1! On Fri, Mar 2, 2012 at 7:40 PM, Johan Tibell johan.tib...@gmail.com wrote: Hi all, These ideas are still in very early stages. I present them here in hope of starting a discussion. (We discussed this quite a bit at last year's ICFP; I hope this slightly different take on the problem might lead to new ideas.) I think the next big step in Haskell performance is going to come from using better data representation in common types such as lists, sets, and maps. Today these polymorphic data structures use both more memory and have more indirections than necessary, due to boxing of values. This boxing is due to the values being stored in fields of polymorphic type.

First idea: instead of rejecting unpack pragmas on polymorphic fields, have them require a class constraint on the field types. Example:

    data UnboxPair a b = (Unbox a, Unbox b) => UP {-# UNPACK #-} !a {-# UNPACK #-} !b

The Unbox type class would be similar in spirit to the class with the same name in the vector package, but be implemented internally by GHC. To a first approximation, instances would only exist for fields that unpack to non-pointer types (e.g. Int).

Second idea: introduce a new pragma that has a similar effect on representations as DPH's [::] vector type. This new pragma does deep unpacking, allowing for more types to be instances of the Unbox type, e.g. pairs. Example:

    data T = C {-# UNWRAP #-} (a, b)

If you squint a bit, this pragma does the same as [: (a, b) :], except no vectors are involved. The final representation would be the unpacked representations of a and b, concatenated together (e.g. (Int, Int) would result in the field above being 128 bits wide on a 64-bit machine).

The meta-idea tying these two ideas together is to allow for some abstraction over representation-transforming pragmas, such as UNPACK.

P.S. Before someone suggests using type families, please read my email titled "Avoiding O(n^2) instances when using associated data types to unpack values into constructors."
Cheers, Johan ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
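To make the status quo concrete, here is a minimal sketch of what the UNPACK pragma supports today, using only monomorphic fields. (The `UnboxPair` syntax above is Johan's proposal and does not compile in any GHC; everything in this sketch uses plain existing Haskell.)

```haskell
-- What UNPACK supports today: strict, monomorphic fields are stored
-- inline in the constructor, with no pointer to a boxed Int.
data PairII = PairII {-# UNPACK #-} !Int {-# UNPACK #-} !Int

-- On a polymorphic field GHC ignores the pragma (with a warning);
-- this is exactly the limitation the proposal aims to lift.
data Pair a b = Pair {-# UNPACK #-} !a {-# UNPACK #-} !b

sumPair :: PairII -> Int
sumPair (PairII x y) = x + y

main :: IO ()
main = print (sumPair (PairII 3 4))
```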
Re: parallel build fixes for 7.4.2
By the way, the documentation for createDirectoryIfMissing doesn't mention anything about its safety under concurrent invocation. Looking at the code, it appears to be safe (it handles the already-exists exception): http://www.haskell.org/ghc/docs/latest/html/libraries/directory/src/System-Directory.html#createDirectoryIfMissing But maybe the docs could reflect that. -Ryan On Fri, Feb 24, 2012 at 3:16 AM, Conrad Parker con...@metadecks.org wrote: Hi, recently we've been tweaking our internal build system at Tsuru to handle parallel builds of both cabal packages via 'cabal-sort --makefile' and our local code tree via 'ghc -M'. In addition to the recompilation checker fixes of #5878, the following would be great to have in 7.4.2: 1) http://hackage.haskell.org/trac/ghc/ticket/5891 -- the patch fixes a race condition in creating parent directories for built object files 2) master commit b6f94b5, "Compile link .note section separately from main.c" -- I think this is the patch that fixes link errors we've seen during parallel builds (via ghc -M) with 7.4.1, such as: /x/y/z.o: file not recognized: File truncated and: /usr/bin/ld: BFD (GNU Binutils for Ubuntu) 2.20.51-system.20100908 internal error, aborting at ../../bfd/merge.c line 872 in _bfd_merged_section_offset Will everything currently in master already be included in the next release, or is it a separate branch? (If it's a separate branch I'll do some more testing to confirm that b6f94b5 is the patch that fixes the link error.) Conrad.
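As a sketch of why handling the already-exists error makes concurrent calls benign (illustrative only; the real System.Directory code differs in detail), a hand-rolled idempotent version only needs to swallow that one exception:

```haskell
-- Illustrative sketch, not the real library code: two threads racing
-- through this function both succeed, because the loser of the race
-- just sees the already-exists error and treats it as success.
import Control.Exception (try)
import System.Directory (createDirectory, doesDirectoryExist)
import System.IO.Error (isAlreadyExistsError)

createDirIdempotent :: FilePath -> IO ()
createDirIdempotent dir = do
  r <- try (createDirectory dir) :: IO (Either IOError ())
  case r of
    Right ()                        -> return ()
    Left e | isAlreadyExistsError e -> return ()  -- lost the race: fine
           | otherwise              -> ioError e  -- a real failure

main :: IO ()
main = do
  createDirIdempotent "demo-dir"
  createDirIdempotent "demo-dir"  -- second call must not fail
  doesDirectoryExist "demo-dir" >>= print
```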
Re: ghc-cabal-Random
Just FYI, it is possible to use OLD cabal binaries with the new GHC 7.4. No need to necessarily rebuild cabal-install with GHC 7.4. I do this all the time. Perhaps it's a bad practice ;-). -Ryan On Mon, Jan 2, 2012 at 8:07 AM, Brent Yorgey byor...@seas.upenn.edu wrote: On Mon, Jan 02, 2012 at 04:35:25PM +0400, Serge D. Mechveliani wrote: On Sun, Jan 01, 2012 at 07:51:39AM -0500, Ryan Newton wrote: I haven't entirely followed this and I see that it's been split over multiple threads. Did cabal install random actually fail for you under ghc-7.4.0.20111219? If so I'd love to know about it as the maintainer of the random package. (It seems to work for me for random-1.0.1.1.) cabal install random cannot run in my situation, because I do not have cabal usable on the command line (I only have the Cabal library in the place where the ghc-7.4.0.20111219 libraries are installed). My idea is that having installed GHC, I use the GHC packages and probably do not need to install Cabal (why complicate things? why force a DoCon user to install extra software?). It is not really forcing them to install extra software. Pretty much everyone these days will already have the Haskell Platform, which comes with cabal-install anyway. -Brent
Re: Records in Haskell
On Thu, Dec 29, 2011 at 12:00 PM, Simon Peyton-Jones simo...@microsoft.com wrote: | The lack of response, I believe, is just a lack of anyone who | can cut through all the noise and come up with some | practical way to move forward in one of the many possible | directions. You're right. There are a few tens of thousands of Haskell programmers now, right? I think a significant fraction of them would in fact appreciate the basic dot syntax and namespace fixes that TDNR proposed. I fear that record-packages-as-libraries are unlikely to be used by a large number of people. Are they now? I love do-it-in-the-language as a principle, but I've watched it really impair the Scheme community with respect to many language features. (Recall your experiences, if you've had them, with homebrew Scheme OOP systems.) It seems hard for non-standard language extensions to gain wide use. Though, to be fair, Haskell's basic types have a history of being replaced by widely accepted alternatives (Vector, ByteString). In spite of its limitations, was there that much of a negative response to Simon's more recent proposal? http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields +1! This is a great bang-for-the-buck proposal; it leverages the existing multi-parameter type classes in a sensible way. I admit I'm a big fan of polymorphic extension. But I don't love it enough for it to impede progress! Regarding extension: in trying to read through all this material I don't see a lot of love for "lacks" constraints a la TRex. As one anecdote, I've been very pleased using Daan Leijen's scoped-labels approach. I implemented it for my embedded stream-processing DSL (WaveScript) and wrote 10K lines of application code with it. I never once ran into a bug resulting from shadowed/duplicate fields!
Cheers, -Ryan But it is very telling that the vast majority of responses on http://www.reddit.com/r/haskell/comments/nph9l/records_stalled_again_leadership_needed/ were not about the subject (leadership) but rather on suggesting yet more incompletely-specified solutions to the original problem. My modest attempt to build a consensus by articulating the simplest solution I could think of manifestly failed. The trouble is that I just don't have the bandwidth (or, if I'm honest, the motivation) to drive this through to a conclusion. And if no one else does either, perhaps it isn't *that* important to anyone. That said, it clearly is *somewhat* important to a lot of people, so doing nothing isn't very satisfactory either. Usually I feel I know how to move forward, but here I don't. Simon
Re: ghc-cabal-Random
I haven't entirely followed this and I see that it's been split over multiple threads. Did cabal install random actually fail for you under ghc-7.4.0.20111219? If so I'd love to know about it as the maintainer of the random package. (It seems to work for me for random-1.0.1.1.) That said, I'm sure AC-random is a fine alternative, and there are many other packages on Hackage as well, including cryptographic-strength ones (crypto-api, intel-aes, etc). Cheers, -Ryan On Sun, Jan 1, 2012 at 7:11 AM, Yitzchak Gale g...@sefer.org wrote: I wrote: Today, it is very unusual to use GHC by itself. To use Haskell, you install the Haskell Platform. That is GHC together with Cabal and a basic set of libraries. It is very easy to install. Wolfram Kahl wrote: However, since you are willing and able to test bleeding-edge versions of GHC, you need to be able to live without the platform, which typically catches up to GHC versions only within a couple of months. It's true that the platform provides a stable version of GHC, as needed by most people, not the bleeding edge. But even if you need GHC HEAD you would typically use cabal. Unless for some reason you need to shuffle around manually the various pieces that get built, follow trees of package dependencies manually, etc. There are some people who need to do that, and it is doable, though much more complicated and error-prone than just using cabal. Almost all Haskell software is expected to be installed using Cabal nowadays. It is important to know that people associate two packages with the name ``Cabal''. They are closely interconnected, though. If you use the platform, that distinction is not very important. It just works. Life without cabal-install is not only possible, but also safer. I disagree with that. Manual processes are error-prone.
With experience, you can learn how to do things totally manually, just like you can learn to build C projects manually without make, and with even more experience, you can learn to avoid all of the pitfalls. It's a good thing to know, but I wouldn't put it at first priority unless there's a special reason for it. (See also: http://www.vex.net/~trebla/haskell/sicp.xhtml ) The Cabal system is quite mature now, but still far from perfect. Problems can arise. Most of the problems are inherent to the DLL Hell that can occur in any separate compilation system, and some arise from the fact that Cabal's dependency solver needs improvement (that's a hard problem). That link is a detailed write-up of just about everything that can possibly go wrong. In my experience, none of that happens until you've been using an installation for a long time, or if you are very trigger-happy with upgrading packages to the latest version for no reason. Or if you're using a package with a huge amount of fast-changing dependencies, like one of the web frameworks. Even then, it's almost always easy enough just to re-install the platform to get a fresh install. Your next few compiles will take a few minutes longer as some packages get rebuilt, but that's about it. To avoid that altogether, I use cabal-dev. This allows me to build a package I am working on in a sandbox with just the dependencies it needs, tailored exactly for the needs of my specific package. Cabal-dev also makes it easy to experiment with how users will experience building my package. It's good to know all the intricacies of the build system, and what is happening beneath the surface if it gets lost. The linked article is a worthwhile read for that. Regards, Yitz
Way to expose BLACKHOLES through an API?
Hi GHC users, When implementing certain concurrent systems-level software in Haskell it is good to be aware of all potentially blocking operations. Presently, blocking on an MVar is explicit (it only happens when you do a takeMVar), but blocking on a BLACKHOLE is implicit and can potentially happen anywhere. If there are known thunks where we, the programmers, know that contention might occur, would it be possible to create a variant of Control.Exception's evaluate that allows us to construct non-blocking software: evaluate :: a -> IO a evaluateNonblocking :: a -> IO (Maybe a) It would simply return Nothing if the value is BLACKHOLE'd. Of course it may be helpful to also distinguish the evaluated and unevaluated states. Further, the above simple version allows data races (it may become BLACKHOLE'd right after we evaluate). An extreme version would actively blackhole it to lock the thunk... but maybe that's overkill and there are some other good ideas out there. A mechanism like the one proposed should, for example, allow us to consume just as much of a lazy ByteString as has already been computed by a producer, WITHOUT blocking and waiting on that producer thread, or migrating the producer computation over to our own thread (blowing its cache). Thanks, -Ryan
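To make the intended use concrete, here is a hypothetical consumer written against the proposed primitive. evaluateNonblocking does not exist in GHC, so this is a sketch of the intended semantics, not runnable code:

```haskell
-- HYPOTHETICAL: assumes the proposed primitive existed with the type
--   evaluateNonblocking :: a -> IO (Maybe a)
-- Walk a lazy list, returning only the prefix that is already in WHNF,
-- never blocking on a BLACKHOLE owned by the producer thread.
availablePrefix :: [a] -> IO [a]
availablePrefix xs = do
  m <- evaluateNonblocking xs
  case m of
    Nothing       -> return []              -- under evaluation elsewhere: stop
    Just []       -> return []
    Just (y:rest) -> fmap (y:) (availablePrefix rest)
```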
Re: Way to expose BLACKHOLES through an API?
Jan voted for the explicit lock-and-blackhole version as safer. I realize that for the ByteString example, all you would want to gently consume what is already available is WHNF detection alone. In that scenario you don't want to evaluate anything, just consume what is already evaluated. I would propose that when you do want to explicitly and actively blackhole something, the call be non-blocking (i.e. if someone else has already blackhole'd it, you don't wait). So the state machine would go:

    tryAcquire x = case unsafeRTStatus x of
      Blackhole   -> return Nothing
      Unevaluated -> do b <- tryBlackhole x
                        if b then return (Just x)
                             else return Nothing
      Evaluated   -> return (Just x)

It would simply return Nothing if the value is BLACKHOLE'd. Of course it may be helpful to also distinguish the evaluated and unevaluated states. Further, the above simple version allows data races (it may become BLACKHOLE'd right after we evaluate). An extreme version would actively blackhole it to lock the thunk... but maybe that's overkill and there are some other good ideas out there. I'd submit that your latter suggestion is far safer: return Nothing unless we successfully blackhole the thunk or find that it's already been evaluated. We actually *know* the blocking behavior we'll get, and it's behavior we can't easily obtain through any other mechanism (e.g. we'd have to add multiple unsafe indirections through mutable cells into the lazy ByteString implementation to obtain the same behavior in any other way, and essentially write out the laziness longhand, losing the benefits of indirection removal and so forth). A mechanism like the one proposed should, for example, allow us to consume just as much of a lazy ByteString as has already been computed by a producer, WITHOUT blocking and waiting on that producer thread, or migrating the producer computation over to our own thread (blowing its cache).
For that you probably want WHNF-or-not detection as well (at least if you want to schedule streaming of computation).
Re: Making a small GHC installation
Also, I find UPX essential in this kind of situation. It can make self-decompressing executables without a noticeable slowdown (in fact, a speedup on network drives!). Typically I see something like this: ghc: 54.6 MB; after 'strip': 33.1 MB; after UPX: 6.2 MB. -Ryan On Tue, Oct 11, 2011 at 4:00 PM, Joachim Breitner m...@joachim-breitner.de wrote: Hi, On Tuesday, 11.10.2011, at 11:02 -0700, Iavor Diatchki wrote: The context is that I need to make a demo VM, which has a limited amount of space, and I'd like to have GHC installed on the system, but the default GHC installation (~700MB) does not fit. The installation does not need to be complete---I don't need documentation, or profiling, or Template Haskell---and I only need to install a fairly limited set of libraries, just enough to build my project. I'd be happy to build a custom version of GHC, if that's the easiest way to achieve the goal. So, if you have experience doing something similar, or you know what might be the best way to approach the problem, advice would be most welcome! The Debian ghc package comes without profiling (in ghc-prof) and documentation (ghc-doc). I'd be happy to hear that someone actually profits from that split :-) Installed size is about 250MB. So also in terms of effort it might be easiest to bootstrap a minimal Debian and install ghc on it. Greetings, Joachim PS: I'm a Debian Developer, so of course my advice is biased :-) -- Joachim nomeata Breitner m...@joachim-breitner.de | nome...@debian.org | GPG: 0x4743206C xmpp: nome...@joachim-breitner.de | http://www.joachim-breitner.de/
Re: Two Proposals
Just anecdotally, I remember we had this problem with Accelerate. Back when we were using it last spring, for some reason we were forced by the API to at least nominally go through lists on our way to the GPU -- which we sorely hoped were deforested! At times (and somewhat unpredictably), we'd be faced with enormous execution times and memory footprints as the runtime tried to create gigantic lists for feeding to Accelerate. Other than that -- I like having a nice literal syntax for other types. But I'm not sure that I construct literals for Sets and IntMaps often enough to profit much... -Ryan On Tue, Oct 4, 2011 at 9:38 AM, Roman Leshchinskiy r...@cse.unsw.edu.au wrote: George Giorgidze wrote: This extension could also be used for giving data-parallel array literals instead of the special syntax used currently. Unfortunately, it couldn't. DPH array literals don't (and can't really) go through lists. In general, if we are going to overload list literals then forcing the desugaring to always go through lists seems wrong to me. There are plenty of data structures where that might result in a significant performance hit. Roman
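For readers unfamiliar with the proposal under discussion, here is a standalone sketch of the class-based desugaring idea -- the `ListLit` class and names are illustrative, not any shipped API (GHC later adopted a very similar design as the OverloadedLists extension with GHC.Exts.IsList). Note that this sketch still materializes a [a] before the container is built, which is exactly the cost Roman objects to:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Illustrative class a literal like [1,2,3] could desugar through:
-- fromListLit is called on the plain list, letting each container
-- type build itself directly.
class ListLit c where
  type Item c
  fromListLit :: [Item c] -> c

instance ListLit [a] where
  type Item [a] = a
  fromListLit = id

-- A toy container with a non-list representation: it stores its
-- elements reversed, so it benefits from building directly.
newtype RevList a = RevList [a] deriving Show

instance ListLit (RevList a) where
  type Item (RevList a) = a
  fromListLit = RevList . reverse

main :: IO ()
main = print (fromListLit [1, 2, 3 :: Int] :: RevList Int)
```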
Re: Records in Haskell
One benefit of TDNR is to replicate the discoverability of APIs that OO programming has - if x :: Foo then typing "x." in an IDE gives you a list of things you can do with a Foo. (Obviously it's not a complete list for various reasons, but it does allow the author of Foo and others to design discoverable APIs.) But add another import, and some of those APIs disappear! Because of the additional ambiguity? Shouldn't the IDE show all options even if the type checker requires a unique one? And of course, the language doesn't need to support TDNR in order for an IDE to use it (although the juxtaposition application syntax doesn't make the UI easy). I'm not sure if I'm saying the same thing as you here, Ian, but my take is that it would be nice for an IDE to solve this in a more general way that works for normal function application as well as x.f backwards function application. That is, the equivalent of the OOP IDE auto-complete should be to ask what set of functions apply to this value. The problem is just that the keystroke order is awkward. Most people typing f x probably type 'f' first ;-). But it should be possible to do: x left-arrow left-arrow magic-keystroke to get the same set of functions as if you do (x. magic-keystroke), shouldn't it? (Maybe magic-keystroke inserts an implicit undefined, type checks, and then figures out the set of functions.) I started playing around with Leksah and scion/emacs (searching for "what's the type of this expr" support) and was a little disappointed that this functionality doesn't seem to exist yet. Or am I wrong and it exists somewhere? Cheers, -Ryan
Re: With every new GHC release, also released any new versions of libraries
FYI, as of 7.2, random wasn't shipped with GHC, but it's been updated on Hackage as well (1.0.1.0). The API is still the haskell98 API, but there are some important bug fixes in there. Cheers, -Ryan On Thu, Aug 25, 2011 at 8:57 AM, Johan Tibell johan.tib...@gmail.com wrote: On Thu, Aug 25, 2011 at 2:36 PM, Ian Lynagh ig...@earth.li wrote: On Thu, Aug 25, 2011 at 10:39:29AM +0200, Johan Tibell wrote: I suggest that with each GHC release the new library releases should be uploaded to Hackage. They normally are, but in this case I ran out of time before disappearing for 2 weeks. They're now uploaded. Sorry for any inconvenience. No problem at all. I just wanted to know if uploading them was standard procedure or not.
Re: Cheap and cheerful partial evaluation
Ah, and there's no core-haskell facility presently? Thanks. On Wed, Aug 24, 2011 at 12:14 AM, Edward Z. Yang ezy...@mit.edu wrote: Since most of GHC's optimizations occur on core, not the user-friendly frontend language, doing so would probably be nontrivial (e.g. we'd want some sort of core-to-Haskell decompiler.) Edward Excerpts from Ryan Newton's message of Tue Aug 23 13:46:45 -0400 2011: Edward, On first glance at your email I misunderstood you as asking about using GHC's optimizer as a source-to-source operation (using GHC as an optimizer, retrieving partially evaluated Haskell code). That's not what you were asking for -- but is it possible? -Ryan P.S. One compiler that comes to mind that exposes this kind of thing nicely is Chez Scheme ( http://scheme.com/ ). In Chez you can get your hands on cp0, which does a source-to-source transform (aka compiler pass zero, after macro expansion), and could use cp0 to preprocess the source and then print it back out. On Mon, Aug 22, 2011 at 8:48 AM, Edward Z. Yang ezy...@mit.edu wrote: I think this ticket sums it up very nicely! Cheers, Edward Excerpts from Max Bolingbroke's message of Mon Aug 22 04:07:59 -0400 2011: On 21 August 2011 19:20, Edward Z. Yang ezy...@mit.edu wrote: And no sooner do I send this email do I realize we have 'inline' built-in, so I can probably experiment with this right now... You may be interested in my related ticket #5029: http://hackage.haskell.org/trac/ghc/ticket/5059 I don't think this is totally implausible but you have to be very careful with recursive functions. Max
Re: Cheap and cheerful partial evaluation
Edward, On first glance at your email I misunderstood you as asking about using GHC's optimizer as a source-to-source operation (using GHC as an optimizer, retrieving partially evaluated Haskell code). That's not what you were asking for -- but is it possible? -Ryan P.S. One compiler that comes to mind that exposes this kind of thing nicely is Chez Scheme ( http://scheme.com/ ). In Chez you can get your hands on cp0 which does a source to source transform (aka compiler pass zero, after macro expansion), and could use cp0 to preprocess the source and then print it back out. On Mon, Aug 22, 2011 at 8:48 AM, Edward Z. Yang ezy...@mit.edu wrote: I think this ticket sums it up very nicely! Cheers, Edward Excerpts from Max Bolingbroke's message of Mon Aug 22 04:07:59 -0400 2011: On 21 August 2011 19:20, Edward Z. Yang ezy...@mit.edu wrote: And no sooner do I send this email do I realize we have 'inline' built-in, so I can probably experiment with this right now... You may be interested in my related ticket #5029: http://hackage.haskell.org/trac/ghc/ticket/5059 I don't think this is totally implausible but you have to be very careful with recursive functions. Max
Re: Superclass defaults
My only input is that we have at least 2-3 (depending on whether the latter two are to be considered separate) hierarchies in want of refactoring: Functor/Applicative/Monad, Num and friends, and Monoid. Ideally any solution would solve the problem for all of them, but even if it doesn't (and only solves, say, Monad's case), I think it should be a requirement that it at least allows for the others to be solved as well in an incremental fashion afterwards (whether by 'upgrading' the by-then existing feature, or adding a new, orthogonal one). The undesirable scenario would be where you would have to change the world all over again a second time to resolve the remaining problems. Another place where this might help would be with the RandomGen/SplittableGen issue: http://hackage.haskell.org/trac/ghc/ticket/4314 If design goal 1 is met, then clients would not have to refactor their Random instance to match the new class factoring. -Ryan
Experiencing libffi related problem with GHC HEAD on Ubuntu 11.04
Has anyone seen this one before?

    # libffi.so needs to be built with the correct soname.
    # NOTE: this builds libffi_convience.so with the incorrect
    # soname, but we don't need that anyway!
    cd libffi \
    cp build/libtool build/libtool.orig; \
    sed -e s/soname_spec=.*/soname_spec=libHSffi-ghc7.1.20110609.so/ build/libtool.orig > build/libtool
    cp: cannot stat `build/libtool': No such file or directory
    sed: can't read build/libtool.orig: No such file or directory
    make[2]: *** [libffi/stamp.ffi.configure] Error 2
    make[1]: *** [all_libffi] Error 2
    make[1]: Leaving directory `/home/newton/build/haskell2/ghc-validate'
    make: *** [all] Error 2

Thanks, -Ryan P.S. It's not new. I just did a 'git pull origin master' and './sync-all pull origin master' and got this error, but I also got the same error on this machine back on June 9th. However, this machine has built GHC HEAD in the past. It has the standard Ubuntu libffi-dev and libffi5 packages installed.
Re: [Haskell-cafe] Splittable random numbers
Hi cafe, I want to add the ability to use AES-NI instructions on Intel architectures to GHC. Mainly I'd like to do splittable random number generators based on AES, as was suggested at the outset of this email. (I met Burton Smith last week and this topic came up.) I was just reading the below thread about the plugin architecture, which got me thinking about what the right way to add AES-NI is. (Disregarding for a moment portability and the issue of where to test cpuid...) http://www.haskell.org/pipermail/glasgow-haskell-users/2011-January/019874.html The FFI is always an option. But after reading the first N pages I could come across from Google, I'm still not totally clear on whether unsafe foreign calls can happen simultaneously from separate Haskell threads (and with sufficiently low overhead for this purpose). I also ran across the phrase "compiler primitive" somewhere wrt GHC: http://hackage.haskell.org/trac/ghc/wiki/AddingNewPrimitiveOperations Is that the right way to go? Or would the compiler plugin mechanism possibly allow doing this without modifying mainline GHC? Thanks, -Ryan On Fri, Nov 12, 2010 at 6:26 PM, wren ng thornton w...@freegeek.org wrote: On 11/12/10 5:33 AM, Richard Senington wrote: It does not give the results you would want. This may have something to do with picking good parameters for the mkLehmerTree function. For example, using a random setup, I just got these results:

    result  expected  range
    16.814  16.0      (1,31)
    16.191  16.5      (1,32)
    16.576  17.0      (1,33)
    17.081  17.5      (1,34)
    17.543  18.0      (1,35)

Have you run any significance tests? I wouldn't be surprised to see +/-0.5 as within the bounds of expected randomness. I'm more worried about it seeming to be consistently on the -0.5 end of things than I am about it not matching expectation (how many samples did you take again?). For small ranges like this, a consistent -0.5 (or +0.5) tends to indicate off-by-one errors in the generator.
-- Live well, ~wren ___ Haskell-Cafe mailing list haskell-c...@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
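As a concrete picture of the splittable interface under discussion, here is a toy splittable generator in pure Haskell. The constants and mixing are purely illustrative and have none of the statistical (let alone cryptographic) quality the thread is after -- that gap is precisely what an AES-based split would close:

```haskell
-- TOY EXAMPLE ONLY: a linear-congruential step plus ad-hoc state
-- perturbation for split. Do not use for anything real.
import Data.Bits (xor)
import Data.Word (Word64)

newtype Gen = Gen Word64

-- Advance the state with a 64-bit LCG step (wraps modulo 2^64).
next :: Gen -> (Word64, Gen)
next (Gen s) = (s', Gen s')
  where s' = s * 6364136223846793005 + 1442695040888963407

-- Derive two child generators whose states are perturbed differently,
-- so each produces its own stream without consuming the other's.
split :: Gen -> (Gen, Gen)
split (Gen s) = (Gen (s `xor` 0x9E3779B97F4A7C15), Gen (s + 0x632BE59BD9B4E019))

main :: IO ()
main = do
  let g        = Gen 42
      (gl, gr) = split g
      (a, _)   = next gl
      (b, _)   = next gr
  print (a /= b)   -- the two streams start differently
```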
Re: [Haskell-cafe] Splittable random numbers
I'm not too familiar with all the Haskell APIs for RNGs. This is the first time I've looked at CryptoRandomGen, but I can see the benefit of having a bytestring interface rather than the System.Random Int-based one. Is there a reason that the AES implementation in the AES or crypto packages can't be ripped out and repackaged in the way you would like? Cheers, -Ryan On Fri, Jan 21, 2011 at 6:11 PM, Thomas DuBuisson thomas.dubuis...@gmail.com wrote: Ryan, If you make an AES-based RNG then consider making an instance for CryptoRandomGen (see DRBG [1] for example instances). Such an instance means you can use splitGen [2], which can split generators in the manner described in this thread. If you make the RNG match NIST SP 800-90 then feel free to send it to me for inclusion in the DRBG package; I've been meaning to make the block cipher based DRBG for a while now. Finally, any implementation of AES (using NI or not) could probably go in its own package or a cipher-specific package like CryptoCipher [3]. It's a shame we don't have an AES implementation on Hackage that 1) exposes the fundamental block interface instead of some higher-level wrapping and 2) isn't tied to a large library. Cheers, Thomas [1] http://hackage.haskell.org/package/DRBG and http://hackage.haskell.org/packages/archive/DRBG/0.1.2/doc/html/src/Crypto-Random-DRBG.html#HmacDRBG [2] http://hackage.haskell.org/packages/archive/crypto-api/0.3.1/doc/html/Crypto-Random.html#v:splitGen [3] http://hackage.haskell.org/package/cryptocipher On Fri, Jan 21, 2011 at 2:19 PM, Ryan Newton rrnew...@gmail.com wrote: Hi cafe, I want to add the ability to use AES-NI instructions on Intel architectures to GHC. Mainly I'd like to do splittable random number generators based on AES as was suggested at the outset of this email. (I met Burton Smith last week and this topic came up.) I was just reading the below thread about the plugin architecture which got me thinking about what the right way to add AES-NI is.
(Disregarding for a moment portability and the issue of where to test cpuid...) http://www.haskell.org/pipermail/glasgow-haskell-users/2011-January/019874.html The FFI is always an option. But after reading the first N pages I could come across from Google, I'm still not totally clear on whether unsafe foreign calls can happen simultaneously from separate Haskell threads (and with sufficiently low overhead for this purpose). I also ran across the phrase "compiler primitive" somewhere wrt GHC: http://hackage.haskell.org/trac/ghc/wiki/AddingNewPrimitiveOperations Is that the right way to go? Or would the compiler plugin mechanism possibly allow doing this without modifying mainline GHC? Thanks, -Ryan On Fri, Nov 12, 2010 at 6:26 PM, wren ng thornton w...@freegeek.org wrote: On 11/12/10 5:33 AM, Richard Senington wrote: It does not give the results you would want. This may have something to do with picking good parameters for the mkLehmerTree function. For example, using a random setup, I just got these results:

    result  expected  range
    16.814  16.0      (1,31)
    16.191  16.5      (1,32)
    16.576  17.0      (1,33)
    17.081  17.5      (1,34)
    17.543  18.0      (1,35)

Have you run any significance tests? I wouldn't be surprised to see +/-0.5 as within the bounds of expected randomness. I'm more worried about it seeming to be consistently on the -0.5 end of things than I am about it not matching expectation (how many samples did you take again?). For small ranges like this, a consistent -0.5 (or +0.5) tends to indicate off-by-one errors in the generator. -- Live well, ~wren
Re: Terminate unused worker threads
Hi all,
Apologies for commenting before understanding Capability.c very well, but it seems that this file uses locking quite heavily. Has there been an analysis of whether atomic memory ops and lock-free algorithms could play any role here? Simon mentioned keeping track of the number of items in the queue so as not to have to traverse it while holding the lock. That, for example, seems like it could be accomplished with an atomically modified counter.

Cheers,
-Ryan

On Wed, Nov 17, 2010 at 10:25 AM, Edward Z. Yang ezy...@mit.edu wrote:

Excerpts from Simon Marlow's message of Wed Nov 17 06:15:46 -0500 2010:

I suggest keeping track of the number of items in the queue.

Ok, I added a spare_workers_no field to the Capability struct.

So I think the main thing missing is a call to workerTaskStop().

Added.

It would be really nice if we could arrange that in the case where we have too many spare workers, the extra workers exit via the same route, but I suspect that might not be easy.

Do you mean spare workers that have useful tasks to do?

Edward

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
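The atomically-modified counter Ryan suggests can be sketched as follows. The actual code under discussion lives in the RTS's Capability.c and is C, so this Haskell version is purely an illustration of the idea: maintain a count alongside the queue with an atomic read-modify-write, so checking the length never requires taking the queue lock and traversing it.

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef', readIORef)

-- Illustration only (the real RTS code is C): a counter kept next to
-- the spare-worker queue, bumped atomically on enqueue/dequeue so its
-- value can be read without holding the queue's lock.
newCounter :: IO (IORef Int)
newCounter = newIORef 0

incr, decr :: IORef Int -> IO ()
incr c = atomicModifyIORef' c (\n -> (n + 1, ()))
decr c = atomicModifyIORef' c (\n -> (n - 1, ()))

main :: IO ()
main = do
  c <- newCounter
  mapM_ (const (incr c)) [1 .. 5 :: Int]  -- five workers enqueued
  decr c                                  -- one worker exits
  readIORef c >>= print                   -- 4
```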
Re: ghc-6.12.3 package base - build failure on ubuntu10.10 (works with 10.4)
Usually I get undefined references wrt iconv (similar to the below post), not a failure at configure time.

http://www.mail-archive.com/haskell-c...@haskell.org/msg77195.html

Although I have seen seemingly spurious "Missing C libraries" errors before (for example, when I tried building on Windows under Cygwin I got that same message regarding libs rt and dl -- which were indeed present in /usr/lib!). I am going to upgrade to Ubuntu 10.10 today, so I'll let you know if I run into the same thing.

-Ryan

On Tue, Oct 19, 2010 at 9:41 AM, Simon Marlow marlo...@gmail.com wrote:

On 15/10/2010 16:35, Mischa Dieterle wrote:

Hi,
I'm getting a build failure when I try to compile ghc-6.12 from the http://darcs.haskell.org/ghc-6.12 repo.

ghc-cabal: Missing dependency on a foreign library:
* Missing C library: iconv
This problem can usually be solved by installing the system package that provides this library (you may need the -dev version). If the library is already installed but in a non-standard location then you can use the flags --extra-include-dirs= and --extra-lib-dirs= to specify where it is.
make[1]: *** [libraries/base/dist-install/package-data.mk] Error 1
make: *** [all] Error 2

I'm using an Ubuntu 10.10 64-bit system with libc-bin and libc-dev-bin installed. I don't think the problem is a missing iconv library. Maybe the problem is in the configure script generated by the new autoconf version (autoconf version is 2.67).

I haven't encountered this problem, but I'm not using Ubuntu 10.10 yet. Please report the bug on the Trac so we don't forget about it.

Cheers,
Simon

Maybe someone can help.
Cheers,
Mischa

Stderr output of ./configure on base module is:

./configure: line 8709: ${ac_cv_search_iconv_t_cd_ __cd___iconv_open_ __iconv_cd_NULL_NULL_NULL_NULL__ __iconv_close_cd__+set}: bad substitution
./configure: line 8735: ac_cv_search_iconv_t_cd_: command not found
./configure: line 8736: __cd___iconv_open_: command not found
./configure: line 8737: __iconv_cd_NULL_NULL_NULL_NULL__: command not found
./configure: line 8742: ${ac_cv_search_iconv_t_cd_ __cd___iconv_open_ __iconv_cd_NULL_NULL_NULL_NULL__ __iconv_close_cd__+set}: bad substitution
./configure: line 8742: ${ac_cv_search_iconv_t_cd_ __cd___iconv_open_ __iconv_cd_NULL_NULL_NULL_NULL__ __iconv_close_cd__+set}: bad substitution
./configure: line 8746: ${ac_cv_search_iconv_t_cd_ __cd___iconv_open_ __iconv_cd_NULL_NULL_NULL_NULL__ __iconv_close_cd__+set}: bad substitution
./configure: line 8746: ac_cv_search_iconv_t_cd_: command not found
./configure: line 8747: __cd___iconv_open_: command not found
./configure: line 8748: __iconv_cd_NULL_NULL_NULL_NULL__: command not found
./configure: line 8751: __cd___iconv_open_: command not found

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
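For reference, the flags named in the ghc-cabal error message are Cabal's, while GHC's own source build takes the --with-iconv-* style options discussed elsewhere on this list. A sketch of both, assuming a Fink-style /sw prefix as an example location; substitute wherever your iconv actually lives:

```shell
# Hypothetical paths -- adjust to your own iconv install location.
# For a single Cabal package, the flags the error message names:
cabal install somepkg --extra-include-dirs=/sw/include --extra-lib-dirs=/sw/lib

# For GHC's own source build, the corresponding configure options:
./configure --with-iconv-includes=/sw/include --with-iconv-libraries=/sw/lib
```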