Re: Build system idea
Roman Leshchinskiy wrote: On 12/08/2008, at 20:11, Simon Marlow wrote: - Extract the code from Cabal that generates Makefiles, and treat it as part of the GHC build system. Rather than generating a Makefile complete with build rules, we generate a Makefile that just has the package-specific metadata (list of modules, etc.), and put the code to actually build the package in the GHC build system. Sounds good. It would be nice if the .cabal parser from Cabal could be made into a separate, stable library which ghc (and nhc?) could use. This makes me wonder, though. Wouldn't this model make more sense for Cabal in general than the current approach of duplicating the functionality of autoconf, make and other stuff? If it works for ghc, it ought to work for other projects, too. Cabal as a preprocessor seems much more attractive to me than as a universal build system. So packages would be required to provide their own build system? That sounds like it would make it a lot harder for people to just create a package that others can use. The ease of making a Cabal package has, I think, a lot to do with the wealth of software available on Hackage. GHC is a special case: we already need a build system for other reasons. It was a design decision early on with Cabal that we didn't want to rely on the target system having a Unix-like build environment. You might disagree with this, but it certainly has some value: a Windows user can download GHC and immediately start building and installing external packages without having to install Cygwin. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Version control systems
Norman Ramsey wrote: I also see repeatedly that the distinction between the build system and packaging system is blurry: both have to know about build targets, dependencies, and so on. At the time of the wonderful GHC Hackathon in Portland, where the GHC API was first introduced to the public, I urged Simon PJ to consider taking ghc --make and generalising it to support other languages. I still think this would be a good project. I don't want to speak for those involved, but I believe this is what the "make-like dependency framework for Cabal" SoC project is doing: http://vezzosi.blogspot.com/2008/06/my-summer-of-code-project-dependency.html Cheers, Simon
Re: Version control systems
Matthias Kilian wrote: I mean the GHC-specific template used for building the Makefile (Distribution/Simple/GHC/Makefile.in) and the function `makefile` in Distribution/Simple/GHC.hs (this function even spills out some make rules in addition to what's in Makefile.in, which looks very wrong to me). Yes, it relies only on the Cabal metadata, but the output is a Makefile only useful for building GHC. Ok, this statement is plainly not true, since I can use 'cabal makefile' to build any package outside of the GHC build tree. So perhaps I've misunderstood your point? It'd be better to be able to run

  $ ./Setup mkmetadata -prefix foo-

which just produces some simple variable declarations like

  foo-impl-ghc-build-depends = rts
  foo-impl-ghc-exposed-modules = Data.Generics Data.Generics.Aliases ...
  foo-exposed-modules = Control.Applicative Control.Arrow ...
  foo-c-sources = cbits/PrelIOUtils.c cbits/WCsubst.c ...
  foo-windows-extra-libraries = wsock32 msvcrt kernel32 user32 shell32
  foo-extensions = CPP
  foo-ghc-options = -package-name base
  foo-nhc98-options = -H4M -K3M

Yes, we could use this to implement GHC's build system. It's somewhat similar to the scheme I suggested in the other thread, but more generic. I'd be completely happy to do it this way if the functionality would be useful to others outside GHC too. Cheers, Simon
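The flat-variable idea above is easy to prototype. Here is a minimal sketch of rendering package metadata as make-style variable bindings; `PkgMeta` and its field names are invented for illustration — real code would pull these fields out of Cabal's `PackageDescription`:

```haskell
-- Hypothetical sketch: render package metadata as flat, make-style
-- variable declarations, in the spirit of "./Setup mkmetadata -prefix foo-".
data PkgMeta = PkgMeta
  { exposedModules :: [String]
  , cSources       :: [String]
  , ghcOptions     :: [String]
  }

renderMeta :: String -> PkgMeta -> String
renderMeta prefix p = unlines
  [ var "exposed-modules" (exposedModules p)
  , var "c-sources"       (cSources p)
  , var "ghc-options"     (ghcOptions p)
  ]
  where
    var name vals = prefix ++ name ++ " = " ++ unwords vals
```

A make-based build system could then simply `include` the generated file and use the variables in its own rules.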
Re: GHC project blog? (Re: Version control systems)
Claus Reinke wrote: Perhaps it would be useful for GHC HQ to have a GHC project blog, Actually we have talked about doing that, and it's highly likely we'll set one up in due course. I think it's worth letting the current discussion(s) run their course and then we'll have a set of concrete decisions to act upon, one of which will probably be to set up a blog so that GHC devs can communicate what they're up to. Cheers, Simon
Re: Build system idea
Roman Leshchinskiy wrote: Of course there should be a standard build system for simple packages. It could be part of Cabal or a separate tool (for which Cabal could, again, act as a preprocessor). GHC is a special case: we already need a build system for other reasons. I agree. I just don't think that adding a full-fledged build system to Cabal is the solution. In my experience, huge monolithic tools which try to do everything never work well. I much prefer small, modular tools. A Haskell-based build system is an interesting project but why does it have to be a part of Cabal? Hmm, but you said above "there should be a standard build system for simple packages. It could be part of Cabal...". Cabal has two parts: some generic infrastructure, and a "simple" build system (under Distribution.Simple) that suffices for most packages. We distribute them together only because it's convenient; you don't have to use the simple build system if you don't want to. I think perhaps you're objecting to the fact that the "simple" build system isn't so simple, and we keep adding more functionality to it. This is true, but the alternative - forcing some packages to provide their own build system - seems worse to me. Cheers, Simon
Re: Build system idea
Roman Leshchinskiy wrote: But that is precisely my (other) point. A lot of that work is really unnecessary and could be done by Cabal since it only or mostly depends on the package information. Instead, it is implemented somewhere in Distribution.Simple and not really usable from the outside. For instance, a lot of the functionality of setup sdist, setup register and so on could be implemented generically and used by a make-based build system as well. That's exactly what I'm proposing we do in GHC: re-use Cabal's setup register and some of the other parts of the simple build system in a make-based build system for packages. It might require a bit of refactoring of Cabal, but I don't expect it to be a major upheaval at all. I think what you're proposing is mostly a matter of abstracting parts of Cabal with cleaner and more modular APIs, which is absolutely a good thing, but doesn't require a fundamental redesign. The tight coupling and lack of separation between Cabal's generic parts and the simple build system is somewhat accidental (lazy implementors :-), and is actually a lot better than it used to be thanks to the work Duncan has put in. I'm sure it'll improve further over time. The other part of your complaint is that the BuildInfo is in the .cabal file along with the PackageDescription (the types are pretty well separated internally). Again I don't think there's anything fundamental here, and in fact some packages have separate .buildinfo files. Cheers, Simon
Re: Version control systems
Duncan Coutts wrote: Turns out that the reason for slow darcs whatsnew is ghc bug #2093 http://hackage.haskell.org/trac/ghc/ticket/2093 because getSymbolicLinkStatus is broken on 32bit systems in 6.8.2 it means that the 'stat' optimisation does not work so darcs has to read the actual contents of many files. Obviously that's very slow, especially over nfs. That explains why it worked for me in 0.2 seconds but for you took several seconds user time (and even more real time due to nfs).

Yes, I was aware of the #2093 problem (someone else pointed it out to me earlier), but it's not the cause of the slow whatsnew I'm seeing: my darcs is compiled with 6.8.3.

  ~/darcs/ghc-testing/testsuite-hashed > darcs +RTS --info
  [("GHC RTS", "Yes")
  ,("GHC version", "6.8.3")
  ,("RTS way", "rts_thr")
  ,("Host platform", "x86_64-unknown-linux")
  ,("Build platform", "x86_64-unknown-linux")
  ,("Target platform", "x86_64-unknown-linux")
  ,("Compiler unregisterised", "NO")
  ,("Tables next to code", "YES")
  ]
  ~/darcs/ghc-testing/testsuite-hashed > time darcs wh
  No changes!
  [2]  15793 exit 1  darcs wh
  21.35s real  9.56s user  4.28s system  64% darcs wh
  ~/darcs/ghc-testing/testsuite-hashed > darcs --version
  2.0.1rc2 (2.0.1rc2 (+ -1 patch))
  ~/darcs/ghc-testing/testsuite-hashed > darcs query repo
  Type: darcs
  Format: hashed
  Root: /home/simonmar/darcs-all/work/ghc-testing/testsuite-hashed
  Pristine: HashedPristine
  Cache: thisrepo:/home/simonmar/darcs-all/work/ghc-testing/testsuite-hashed
  boringfile Pref: .darcs-boring
  Default Remote: /home/simonmar/darcs-all/work/ghc-testing/testsuite
  Num Patches: 2834

It's better on the darcs-2 version of the repo:

  ~/darcs/ghc-testing/testsuite-hashed2 > darcs query repo
  Type: darcs
  Format: hashed, darcs-2
  Root: /home/simonmar/darcs-all/work/ghc-testing/testsuite-hashed2
  Pristine: HashedPristine
  Cache: thisrepo:/home/simonmar/darcs-all/work/ghc-testing/testsuite-hashed2
  Num Patches: 2834
  ~/darcs/ghc-testing/testsuite-hashed2 > time darcs wh
  No changes!
  [2]  15824 exit 1  darcs wh
  3.69s real  1.08s user  0.53s system  43% darcs wh

Better, but still a factor of ~4 slower than on the darcs-1 repo.

If you were using http://darcs.haskell.org/ghc-hashedrepo/ then there's a further explanation. According to the darcs devs that repo is: "in some weird intermediate (not final) hashed format that doesn't keep (original) filesizes in filenames. So in effect, it's running like with --ignore-times still"

Nope, I'm not using that repo, these were ones I created freshly yesterday. I will try building a fresh darcs to see if that helps. Cheers, Simon
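For context, the 'stat' optimisation mentioned above amounts to comparing cheap file metadata against a cached record before reading contents. A rough sketch in Haskell (the cache representation here is invented; darcs' real pristine cache is more involved):

```haskell
import System.Posix.Files (getSymbolicLinkStatus, fileSize, modificationTime)
import System.Posix.Types (EpochTime, FileOffset)

-- Only if the size or mtime differs from what was recorded do we need to
-- read the file's contents. When getSymbolicLinkStatus is broken (ghc bug
-- #2093 on 32-bit), this shortcut is lost and every file must be read,
-- which is what made whatsnew slow, especially over nfs.
possiblyChanged :: FilePath -> (FileOffset, EpochTime) -> IO Bool
possiblyChanged path (cachedSize, cachedMTime) = do
  st <- getSymbolicLinkStatus path
  return (fileSize st /= cachedSize || modificationTime st /= cachedMTime)
```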
Re: Version control systems
Manuel M T Chakravarty wrote: From what you are saying, it seems that one "advantage" of git (in-place branch switching) is not going to be useful to GHC in any case (because we use nested repositories). As far as I can tell, in-place branches are not a lot of use to us compared to just having separate checkouts for each local branch. For one thing, having separate source trees lets you keep multiple builds, whereas with in-place branches you can only have one build at a time, and switching branches probably requires a complete rebuild. However, I think I am convinced that using in-place branches for the master repo makes sense. That way we don't need to publish the names of new branches when we make them, and everyone can easily see which branches of GHC are available from the main repo. Cheers, Simon
Re: Version control systems
Duncan Coutts wrote: On Fri, 2008-08-15 at 15:12 +0100, Ian Lynagh wrote: On Fri, Aug 15, 2008 at 11:12:20AM +1000, Manuel M T Chakravarty wrote: Moreover, as I wrote a few times before, some reasons for switching in the first place are invalidated by not having the core libraries in git, too. For example, one complaint about darcs is that it either doesn't build (on the Sun Solaris T1 and T2 machines) I don't remember seeing this mentioned before, and googling for "Solaris T1" darcs doesn't find anything. That's probably because there are only two T1/T2 machines in the entire world that people are using to run ghc. :-) One of them is at UNSW and the other was recently donated by Sun to the community and is just about to go online at Chalmers. What goes wrong? I'd expect darcs to build anywhere GHC does. So would I usually, though I've had to turn down cc flags to get darcs to build on ia64 before (SHA1.hs generates enormous register pressure). We should really use a C implementation of SHA1, the Haskell version isn't buying us anything beyond being a stress test of the register allocator. Cheers, Simon
Re: "dataflow rewriting engine"
Manuel M T Chakravarty wrote: Deborah Goldsmith: Has there been any thought about working with the LLVM project? I didn't find anything on the wiki along those lines. I have only had a rather brief look at LLVM, but my understanding at the moment is that LLVM would not be able to support one of GHC's current code layout optimisations. More precisely, with LLVM, it would not be possible to enforce that the meta data for a closure is placed right before (in terms of layout in the address space) the code executing the "eval" method of that same closure. GHC uses that to have the closure code pointer point directly to the "eval" code (and hence also by an appropriate offset) to the various fields of the meta data. If that layout cannot be ensured, GHC needs to take one more indirection to execute "evals" (which is a very frequent operation) - this is what an unregistered build does btw. However, I am not convinced that this layout optimisation is really gaining that much extra performance these days. In particular, since dynamic pointer tagging, very short running "evals" (for which the extra indirection incurs the largest overhead) have become less frequent. Even if there is a slight performance regression, I think, it would be worthwhile to consider giving up on the described layout constraint. It is the Last Quirk that keeps GHC from using standard compiler back-ends (such as LLVM), and I suspect, it is not worth it anymore. When we discussed this last, Simon Marlow planned to run benchmarks to determine how much performance the layout optimisation gains us these days. Simon, did you ever get around to that? I didn't get around to benchmarking it, but since the layout optimisation is easily switched off (it's called tablesNextToCode inside GHC) there's really nothing stopping someone from building a backend that doesn't rely on it. Everything works without this optimisation, including GHCi, the debugger, and the FFI. 
My guess is you'd pay a few percent on average for not doing it. You're quite right that pointer tagging makes it less attractive, but like most optimisations there are programs that fall outside the common case. Programs that do a lot of thunk evals will suffer the most. Cheers, Simon
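To make the layout question concrete, here is a deliberately crude model (nothing like GHC's real representation) of the indirection at stake: without tables-next-to-code, a closure points at its info table and entering it goes through the table to reach the code; with the optimisation, the closure pointer is effectively the entry-code pointer itself, and the metadata sits at a known offset just before it.

```haskell
-- Crude model, for intuition only: InfoTable bundles the metadata with
-- the "eval" code. In the unoptimised scheme modelled here, every enter
-- dereferences the table to find the code (one extra indirection per
-- eval); tables-next-to-code fuses the two so no such hop is needed.
data InfoTable a = InfoTable
  { itEntry :: a -> a  -- the "eval" code for this closure
  , itSize  :: Int     -- stand-in for layout metadata
  }

data Closure a = Closure
  { clInfo    :: InfoTable a
  , clPayload :: a
  }

-- The unoptimised entry path: fetch the table, then the code, then jump.
enter :: Closure a -> a
enter c = itEntry (clInfo c) (clPayload c)
```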
Re: Build system idea
John Meacham wrote: unfortunately the cabal approach doesn't work. note, I am not saying a declarative configuration manager won't work. in fact, I have sketched a design for one on occasion. but cabal's particular choices are broken. It is treading the same waters that made 'imake' fail. the ideas of forwards and backwards compatibility are _the_ defining features of a configuration manager. Think about this, I can take my old sunsite CD, burned _ten years_ ago and take the unchanged tarballs off that CD and ./configure && make and in general most will work. many were written before linux even existed, many were written with non-gcc compilers, yet they work today. The cabal way wasn't able to handle a single release of ghc and keep forwards or backwards compatibility. That any project ever had to be changed to use the flag 'split-base' is a travesty. What about all the projects on burnt cds or that don't have someone to update them? 20 years from now when we are all using 'fhc' (Fred's Haskell Compiler) will we still have this reference to 'split-base' in our cabal files? how many more flags will have accumulated by then? Sure it's declarative, but in a language that doesn't make sense without the rule-book. autoconf tests things like 'does a library named foo exist and export bar'. 'is char signed or unsigned on the target system'. those are declarative statements and have a defined meaning through all time. (though, implemented in a pretty ugly imperative way) That is what allows autoconfed packages to be compiled by compilers on systems that were never dreamed of when the packages were written. The important thing about Cabal's way of specifying dependencies is that they can be made sound with not much difficulty. If I say that my package depends on base==3.0 and network==1.0, then I can guarantee that as long as those dependencies are present then my package will build. ("but but but..." I hear you say - don't touch that keyboard yet!)
Suppose you used autoconf tests instead. You might happen to know that Network.Socket.blah was added at some point and write a test for that, but alas if you didn't also write a test for Network.Socket.foo (which your code uses but ends up getting removed in network-1.1) then your code breaks. Autoconf doesn't help you make your configuration sound, and you get no prior guarantee that your code will build. Now, Cabal's dependencies have the well-known problem that they're exceptionally brittle, because they either overspecify or underspecify, and it's not possible to get it "just right". On the other hand, autoconf configurations tend to underspecify dependencies, because you typically only write an autoconf test for something that you know has changed in the past - you don't know what's going to change in the future, so you usually just hope for the best. For Cabal I can ask the question "if I modify the API of package P, which other packages might be broken as a result?", but I can't do that with autoconf. Both systems are flawed, but neither fundamentally. For Cabal I think it would be interesting to look into using more precise dependencies (module.identifier::type, rather than package-version) and have them auto-generated. But this has difficult implications: implementing cabal-install's installation plans becomes much harder, for example. So I accept that we do not yet cover the range of configuration choices that are needed by the more complex packages (cf darcs), but I think that we can and that the approach is basically sound. The fact that we can automatically generate distro packages for hundreds of packages is not insignificant. This is just not possible with the autoconf approach. This is just utterly untrue. autoconfed packages that generate rpms, debs, etc are quite common. The only reason cabal can autogenerate distro packages for so many is that many interesting or hard ones just _aren't possible with cabal at all_. Exactly!
Cabal is designed so that a distro packager can write a program that takes a Cabal package and generates a distro package for their distro. It has to do distro-specific stuff, but it doesn't typically need to do package-specific stuff. To generate a distro package from an autoconf package either the package author has to include support for that distro, or a distro packager has to write specific support for that package. There's no way to do generic autoconf->distro package generation, like there is with Cabal. Yes this means that Cabal is less general than autoconf. It was quite a revelation when we discovered this during the design of Cabal - originally we were going to have everything done programmatically in the Setup.hs file, but then we realised that having the package configuration available *as data* gave us a lot more scope for automation, albeit at the expense of some generality. That's the tradeoff - but there's still nothing stopping you from using autoconf and your own build system instead if you need to!
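The soundness claim above can be stated precisely: a dependency with both a lower and an upper bound names exactly the interval of versions the package was tested against. A small self-contained model (deliberately not Cabal's actual Distribution.Version API):

```haskell
-- Toy model of version ranges with a lower (inclusive) and upper
-- (exclusive) bound. Versions compare lexicographically, as lists.
type Version = [Int]               -- e.g. [3,0] for 3.0

data Range = Range Version Version -- Range lowerIncl upperExcl

withinRange :: Version -> Range -> Bool
withinRange v (Range lo hi) = lo <= v && v < hi

-- "base >= 3.0 && < 4": sound as long as the 3.x series keeps the API
-- the package was built against, and it rejects the untested 4.x series.
base3 :: Range
base3 = Range [3,0] [4]
```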
Re: Build system idea
Roman Leshchinskiy wrote: On 28/08/2008, at 23:59, Simon Marlow wrote: The important thing about Cabal's way of specifying dependencies is that they can be made sound with not much difficulty. If I say that my package depends on base==3.0 and network==1.0, then I can guarantee that as long as those dependencies are present then my package will build. ("but but but..." I hear you say - don't touch that keyboard yet!) Suppose you used autoconf tests instead. You might happen to know that Network.Socket.blah was added at some point and write a test for that, but alas if you didn't also write a test for Network.Socket.foo (which your code uses but ends up getting removed in network-1.1) then your code breaks. Autoconf doesn't help you make your configuration sound, and you get no prior guarantee that your code will build. Cabal doesn't give this guarantee, either, since it allows you to depend on just network or on network>x. Indeed. That's why I was careful not to say that Cabal gives you the guarantee, only that it's easy to achieve it. Both systems are flawed, but neither fundamentally. For Cabal I think it would be interesting to look into using more precise dependencies (module.identifier::type, rather than package-version) and have them auto-generated. But this has difficult implications: implementing cabal-install's installation plans becomes much harder, for example. Interesting. From our previous discussion I got the impression that you wouldn't like something like this. :-) Sorry for giving that impression. Yes I'd like to solve the problems that Cabal dependencies have, but I don't want the solution to be too costly - first-class interfaces seem too heavyweight to me. But I do agree with most of the arguments you gave in their favour. Cheers, Simon
Re: Build system idea
John Meacham wrote: On Thu, Aug 28, 2008 at 02:59:16PM +0100, Simon Marlow wrote: The important thing about Cabal's way of specifying dependencies is that they can be made sound with not much difficulty. If I say that my package depends on base==3.0 and network==1.0, then I can guarantee that as long as those dependencies are present then my package will build. ("but but but..." I hear you say - don't touch that keyboard yet!) I can easily achieve this with autoconf or even nothing, I can simply do a test to see if a system is running fedora core 9 using ghc 6.8.2 and be assured that my package will build properly. But this misses the entire point, I want my package to build not on my exact system, I want it to build on _other_ peoples systems. People running with compilers and libraries and on operating systems I never heard of. But you can only do that by carefully enumerating all the dependencies of your code. autoconf doesn't help you do that - you end up underspecifying the dependencies. Cabal makes you overspecify. It's a soundness/completeness thing: Cabal is sound(*1), autoconf is complete(*2). You complain that Cabal is incomplete and I complain that autoconf is unsound. I'd like to make Cabal's dependency specs more complete, but I don't want to make it unsound. (*1) as long as you specify dependencies with both upper and lower bounds (*2) as long as you don't overspecify dependencies I'd be interested in discussing how to improve Cabal's dependency specifications, if you have any thoughts on that. Again, I would like to see this as another option. I think there are interesting ideas in cabal about configuration management. But there needs to be room for alternates including old standby's like autoconf autoconf isn't suitable as a replacement for Cabal's dependency specifications, because it doesn't specify dependencies. I couldn't use an autoconf-configured package with cabal-install, for example.
To generate a distro package from an autoconf package either the package author has to include support for that distro, or a distro packager has to write specific support for that package. There's no way to do generic autoconf->distro package generation, like there is with Cabal. In cabal you only get it because you convinced the cabal people to put in code to support your distro. Which isn't much different than asking the package manager too. False! All of the distro packaging tools for Cabal are separate entities built using the Cabal library. and there are many automatic package managers for autoconf style packages. http://www.toastball.net/toast/ is a good one, it even downloads dependencies from freshmeat when needed. in fact, your projects can probably be auto installed by 'toast projectname' and you didn't even know it! As I understand it, toast doesn't download and build dependencies, you have to know what the dependencies are. (maybe I'm wrong, but that's the impression I got from looking at the docs, and if it *does* know about dependencies, I'd like to know how). http://encap.org/ - one I use on pretty much all my systems. since it is distro independent. Again, dependencies are not tracked automatically, you (or someone else) have to specify them by hand. That's the tradeoff - but there's still nothing stopping you from using autoconf and your own build system instead if you need to! But it is a false tradeoff. the only reason one needs to make that tradeoff is because cabals design doesn't allow the useful ability to mix-n-match parts of it. I would prefer to see cabal improved so I _can_ use its metadata format, its configuration manager for simple projects, autoconf's for more complex ones (with full knowledge of the tradeoffs) and without jumping through hoops. No, it is a tradeoff. We want packages on Hackage to be automatically installable by cabal-install, for one thing. That means they have to say what their dependencies are.
The fact that it _is_ a big deal to replace cabal is the main issue I have. switching involves changing your build system completely. you can't replace just parts of it easily. Or integrate cabal from the bottom up rather than the top down. And it wants to be the _one true_ build system in your project. The counterexample again is the GHC build system, we integrate make and Cabal and autoconf, and we're planning to do more of it with make. Have you thought about how to change Cabal to do what you want? It's only code, after all :-) I'd like to see a standardized meta-info format for just haskell libraries, based on the current cabal format without the cabal specific build information. (this is what jhc uses, and franchise too I think) Just like the 'lsm' linux software map files.
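The point about cabal-install is worth spelling out: because dependencies are plain data, a tool can compute an installation plan mechanically. A minimal sketch (names invented; cabal-install's real solver also handles versions, flags, and much more):

```haskell
import Data.List (partition)

type Pkg = String

-- Repeatedly install every package whose declared dependencies are
-- already satisfied; if at some step no package is ready, the declared
-- dependencies contain a cycle. With autoconf-style packages there is
-- no machine-readable dependency data to compute such a plan from.
installPlan :: [(Pkg, [Pkg])] -> [Pkg]
installPlan [] = []
installPlan pkgs =
  case partition (null . snd) pkgs of
    ([], _) -> error "dependency cycle"
    (ready, rest) ->
      let done  = map fst ready
          rest' = [ (p, filter (`notElem` done) ds) | (p, ds) <- rest ]
      in  done ++ installPlan rest'
```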
Re: Build system idea
Brandon S. Allbery KF8NH wrote: On 2008 Aug 28, at 22:00, Sterling Clover wrote: We do have, although not with easy access, an additional declarative layer "built in" 90% of the time as configuration as type signature. Sure? I think it's easier than you think: someone's already written code to extract the information from .hi files (and indeed ghc will dump it for you: ghc --show-iface foo.hi). In theory there could be a master dictionary of these hosted on hackage, collected from each package's own dictionary, and a given package's dependencies could be computed with high accuracy from it. It's a good idea, but conditional compilation makes it quite a bit harder. Cheers, Simon
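As a sketch of the "extract interface info" idea: given the textual dump GHC prints for an interface file, one could scrape the exported names. The format below is purely illustrative — the interface dump is not a stable contract, and a robust tool would use the GHC API instead:

```haskell
import Data.Char (isSpace)
import Data.List (isPrefixOf)

-- Collect the indented lines following an "exports:" header and split
-- them into names. Illustrative only; the real dump format differs
-- between GHC versions.
exportsOf :: String -> [String]
exportsOf dump =
    concatMap words
  . takeWhile (all isSpace . take 2)  -- indented continuation lines
  . drop 1
  . dropWhile (not . ("exports:" `isPrefixOf`) . dropWhile isSpace)
  $ lines dump
```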
Re: Where STM is unstable at the moment, and how we can fix it
Sterling Clover wrote: This email is inspired by the discussion here: http://hackage.haskell.org/trac/ghc/ticket/2401 As the ticket discusses, unsafeIOToSTM is, unlike unsafePerformIO or unsafeInterleaveIO, genuinely completely unsafe in that there is no way to use it such that a segfault or deadlock is not at least somewhat encouraged. The code attached to the ticket creates a deadlock solely through using it to write to stdout. But, for the same reason that unsafeIOToSTM is unstable, unsafeInterleaveIO now is very unstable as well -- conceivably, data generated from functions with lazy IO (including those in the prelude) could cause deadlocks within STM, and even segfaults. In summary, a "validation" step is performed on all threads inside atomically blocks during garbage collection. This validation step will, on encountering invalid threads (i.e. ones which should be rolled back) immediately kill them dead and retry. This is different from the implementation described in the STM paper, where rollbacks only occur on commit. However, it does add a measure of efficiency. It's not just an efficiency trick, in fact. The validation step is absolutely necessary for correctness. The problem is that a transaction may have seen an inconsistent view of memory, and as a result it may have gone into an infinite loop; the only way to catch and recover from this situation is to validate at regular intervals, say before a GC (this suffers from the problem that the transaction has to be allocating in order to be stopped, but that's another matter). e.g. the code might be something like

  atomically $ do
    a <- readTVar ta
    b <- readTVar tb
    if a == b then loop else return ()

now we might know that a is never equal to b under normal conditions: all the transactions in the program satisfy the invariant. However, since we use optimistic concurrency, it might be the case that this thread sees an inconsistent view of memory in which a==b.
The case would normally be caught at commit time, but this thread isn't going to commit: it goes into an infinite loop instead. As Simon M. notes, the obvious solution would be to turn rollbacks into regular exceptions, but this would open a number of cans of worms. A start, though not sufficient, would be for stm validation to respect blocked status -- not to block on it, obviously, but simply to refuse to rollback a transaction within it. That wouldn't be correct, because the thread might be in an infinite loop inside a block. However, it would probably work in the cases you're interested in, so I wouldn't object to a patch that implemented this workaround for the time being. I do agree that we have a problem here, and I'll re-open the ticket (sorry for leaving it closed). I think raising an (asynchronous) exception is the right solution. We have to make sure the exception cannot be caught by an STM catch, but I think that's do-able. However, another problem we have is that when the IO system re-raises the exception, it'll be raised as a synchronous exception rather than an asynchronous exception. I've just spent an hour or so talking this over here with Simon PJ and we have some ideas for fixing it, I'll try to write it up in a ticket later. Cheers, Simon
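A runnable version of the invariant check from the example in this thread, using Control.Concurrent.STM (the deadlock itself is of course not reproduced here). Any transaction that actually commits is guaranteed a consistent view of memory, so the check below can only report a violation inside the doomed-transaction window that the validation step exists to catch:

```haskell
import Control.Concurrent.STM

-- The program-wide invariant is that the two TVars are never equal.
-- A doomed transaction may transiently observe a == b; a committed
-- transaction never can.
checkInvariant :: TVar Int -> TVar Int -> STM Bool
checkInvariant ta tb = do
  a <- readTVar ta
  b <- readTVar tb
  return (a == b)

-- In the problematic pattern, the code loops forever when it sees
-- a == b: such a transaction never reaches commit-time validation, so
-- only periodic validation (e.g. at GC) can kill and retry it.
```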
*BSD GHC hackers needed...
While perusing the tickets in the 6.10.1 milestone I spotted 3 that look to be FreeBSD-specific:

  #2476: internal error: awaitEvent: descriptor out of range
  #2502: segfault with GHC.Handle.fdToHandle'
  #2511: unix package doesn't load in ghci on freebsd/amd64

It's unlikely that we'll get to these in time for 6.10.1. Any FreeBSD hackers out there want to take a look? Help/advice is available as usual. There's also one NetBSD-specific ticket:

  #2351: NetBSD defines ELF_ST_TYPE

(that one is easy), and one that I think applies to all the *BSDs:

  #2063: Breakage on OpenBSD due to mmap remap

(actually the latter one is a top priority, because without it GHCi can't work, so we need it fixed for 6.10.1). Cheers, Simon
Re: Making GHC work on BSD
Donn Cave wrote: | ... and one that I think applies to all the *BSDs: | | #2063: Breakage on OpenBSD due to mmap remap | | (actually the latter one is a top priority, because without it GHCi can't | work, so we need it fixed for 6.10.1). Is this a change from 6.8.3? NetBSD currently provides 6.8.3 as an optional package for NetBSD/i386 4.0, with ghci included and without any mmap patches as far as I know. It was also working for me on NetBSD/amd64 (which is the platform that would actually need MAP_32BIT?) Yes, the code changed in the HEAD, and currently uses mremap(), which only exists on Linux. We need similar hacks to those done in 6.8.3 to get the BSDs to work, that is to allocate memory from some predefined address in the lower 2Gb of the address space. Cheers, Simon
Re: Making GHC work on BSD
Matthias Kilian wrote: On Tue, Sep 09, 2008 at 03:00:59PM +0100, Simon Marlow wrote:

Is this a change from 6.8.3? NetBSD currently provides 6.8.3 as an optional package for NetBSD/i386 4.0, with ghci included and without any mmap patches as far as I know. It was also working for me on NetBSD/amd64 (which is the platform that would actually need MAP_32BIT?)

Yes, the code changed in the HEAD, and currently uses mremap(), which only exists on Linux. We need hacks similar to those done in 6.8.3 to get the BSDs to work, that is, to allocate memory from some predefined address in the lower 2Gb of the address space.

BTW: are there any big plans[tm] to replace all these OS-specific hacks by something like dlopen(3) and friends for GHC 6.12? I really don't want to hack on Linker.c just to see that hacking obsoleted in a year ;-)

I don't think we can do it all with dlopen(). Even when we have shared libraries for all the packages and the RTS, so we could use dlopen() to load those, we still need to be able to link plain .o files. I suppose dlopen() might be able to load ordinary .o files these days, but it probably won't populate its symbol table with symbols from the .o unless it was linked with --export-dynamic or something. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: simple questions about +RTS -s output
Peter Hercek wrote: Hi, There is an example of +RTS -s output at the end. I have few simple questions: What does "9 Mb total memory in use" mean? Is it in mega bytes (MB) or in mega bits (Mb)? I would expect memory usage to be in bytes (B) but a unit for bits (b) seems to be used. Looks like heap size in mega bytes... It's mega bytes (MB). I fixed the abbreviation a while ago, it'll be correct in 6.10.1. Also there are some new stats about the amount of memory being wasted due to fragmentation in the memory allocator. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: -optl-s (strip) by default
Bulat Ziganshin wrote: Hello Don, Friday, September 12, 2008, 12:54:22 PM, you wrote: when GHC builds executable, it adds debug info by default. since this You can also achieve this by making sure your deployed programs build with Cabal, http://www.haskell.org/pipermail/cabal-devel/2008-March/002427.html No need to hack GHC's view of the linker. btw, it seems like hack in 3rd party tool to fix weird default ghc behavior It might be weird, but it's traditional. Also the symbol information is sometimes useful: for example on Linux Valgrind understands it and can tell you which function has memory errors, or give you a profile (perhaps only useful for people working on the RTS like me). I don't feel that strongly about it though. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: testing ghc-6.10-candidate
Serge D. Mechveliani wrote: 5. The same is for -O2. But this does not gain more performance. -O2 -fvia-C also does not gain more performance, but leads to 7 times longer compilation. Just to be sure: are you saying that with -O2 the compile time is ok, but when you add -fvia-C it takes 7 times longer? If so, it's probably not a bug (we don't have any control over the speed of gcc). Here's the bug report: http://hackage.haskell.org/trac/ghc/ticket/2609 Sergey, could you clarify please? Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Instances and DoCon
Serge D. Mechveliani wrote: On Fri, Sep 19, 2008 at 08:17:12PM +0100, Ian Lynagh wrote: On Tue, Sep 16, 2008 at 10:44:53AM +0100, Simon Peyton-Jones wrote: | | And still ghc-6.8.3 builds itself from source. I have no idea how -- Happy has been needed for some time. Maybe someone else does. It's not meant to be needed for building from a source tarball. This should be fixed now. Dear GHC team, can you, please, provide such GHC source distributives which require possibly smaller set of programs for the user to separately pre-install? For example, do not require of the user to install Alex, Happy and Cabal. GHC source distributions do not require Alex, Happy or Cabal in order to build. If you find one of these is a dependency, then it is a bug - please report it. Note that when building from the development sources (darcs repository), you *do* need more dependencies, but a source distribution is different. Also, as Jason said - this is normally not an issue as few people need to build GHC from source in order to just use it. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: ANNOUNCE: protocol-buffers-0.2.9 for Haskell is ready
Chris Kuklewicz wrote: I am cross-posting this message to several lists. I had learned the trick before the documentation was updated. It seems I have used a very unreliable trick. And the "use castToSTUArray" suggested alternative is a really poor one since I am not using arrays at all. castSTUArray is the way GHC does it - the idea is to allocate a small array, store the Float/Double in it, cast the type of the array to Word32 or whatever, and then read out the contents. It's more or less equivalent to the peek/poke solution, except that it doesn't need unsafePerformIO. GHC's code is here: (look for floatToWord): http://darcs.haskell.org/ghc/compiler/cmm/PprC.hs Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
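The castSTUArray trick Simon describes can be sketched as follows. This is a hedged sketch of the idiom, not GHC's exact code; note that with recent versions of the array package, castSTUArray is exported from Data.Array.Unsafe rather than Data.Array.ST:

```haskell
{-# LANGUAGE FlexibleContexts #-}

import Control.Monad.ST (ST, runST)
import Data.Array.ST (MArray, STUArray, newArray, readArray)
import Data.Array.Unsafe (castSTUArray)
import Data.Word (Word32)

-- Allocate a one-element unboxed array, store the Float in it, cast
-- the array to an array of Word32, and read the bits back out.  No
-- unsafePerformIO is needed; everything stays inside ST.
floatToWord :: Float -> Word32
floatToWord x = runST (cast x)

wordToFloat :: Word32 -> Float
wordToFloat w = runST (cast w)

cast :: (MArray (STUArray s) a (ST s), MArray (STUArray s) b (ST s))
     => a -> ST s b
cast x = do
  arr  <- newArray (0 :: Int, 0) x  -- one-element array holding x
  arr' <- castSTUArray arr          -- reinterpret the element type
  readArray arr' 0
```

For example, floatToWord 1.0 yields the IEEE 754 bit pattern 0x3F800000.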
Re: ANNOUNCE: GHC 6.10.1 beta
Ian Lynagh wrote: On Tue, Sep 23, 2008 at 12:43:53AM -0600, humasect wrote: installPackage: internal error: stg_ap_ppp_ret (GHC version 6.8.3 for i386_apple_darwin) Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug make[2]: *** [build.stage.1] Abort trap make[1]: *** [build.stage.1] Error 2 make: *** [stage1] Error 1 This is a bug in GHC 6.8.3, so isn't something we can fix (unless it also happens with 6.10). Unfortunately it means you can't build 6.10, but you will be able to install a binary distribution or OS X installer. Ian - do you happen to know which bug this is? Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Is -fvia-C still needed?
Jason Dusek wrote: Without GCC, how would we compile C extensions? I'm not sure what you mean. To answer the original question: yes, -fvia-C is almost redundant. We took some steps in 6.10.1 to make -fvia-C behave in a way more consistent with -fasm (that I still need to document properly!), so -fvia-C no longer includes any header files when compiling the generated C code. If you were using -fvia-C to call C functions defined as CPP macros via the FFI, then you can't do that any more in 6.10.1. You have to write a C wrapper function and call that instead. I think -fvia-C generates slightly faster code in some cases, but it might also generate slower code sometimes. FWIW, we still use it for our binary distributions, but I haven't measured the difference it makes, if any. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Is -fvia-C still needed?
leledumbo wrote: The reason why I don't like gcc shipped with ghc distribution is that I waste my harddisk space for two same compilers, only differ in version (I use 4.3.2, ghc ships 3.4.2). And because both are in my PATH, it sometimes behaves unexpectedly. For instance, I try to delete the shipped gcc because I think ghc will be able to find mine (hmm...I suggest ghc implements this, especially for people with low harddisk space). But when I try to compile, it complains about not finding gcc. Crazily, upon compiling with -fvia-C it runs MY gcc, not the shipped one (gcc bindir is placed before ghc bindir in my PATH)! This also happens when I try to bootstrap from source. Second reason, if ghc already performs very well using its ncg, I think it's better to improve it rather than relying on gcc's one. If GHC isn't using its own gcc, that's a bug. We could theoretically ship a cut-down GHC bundle with no gcc, but as others have said it's often useful. For example, hsc2hs needs it, and you'll need a gcc if you write 'foreign export' or 'foreign import "wrapper"' anywhere in your source code. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: ANNOUNCE: GHC 6.10.1 beta
Matthias Kilian wrote: Hi, On Mon, Sep 22, 2008 at 01:35:43AM +0100, Ian Lynagh wrote: We are pleased to announce that the GHC 6.10.0.20080921 snapshot is a beta release of GHC 6.10.1. [...] Please test as much as possible; bugs are much cheaper if we find them before the release!

Not quite the beta snapshot, but from HEAD about the same time you tagged and branched, I got the following positive results on OpenBSD/i386: 1) Build it up to stage2 and make install from ghc-6.6: ok 2) The same again with the result of 1): ok 3) Full build including all extralibs with the result of 2): ok

I'll restart the whole process with what's in the ghc-6.10 branch now (hopefully that's ok for you, since some fixes have been already committed to the branch), and I'll also try to run the testsuite after step 3). BTW: I had some problems running the testsuite some weeks ago, because some autoconf'd stuff was missing. When building from the repository (*not* from the source tarballs), are there any additional steps beyond

  sh boot
  ./configure --prefix=...
  gmake install

required?

No, that should be enough. Thanks for testing it! Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: ghc-path
Claus Reinke wrote: Thank you all. Everything is great now. It was because (Just ghcPath) was not passed into runGhc, and lack of ghc-path. btw: is ghc-path going to be part of the ghc release or the haskell platform? Currently, I can't see it in either, which would be a pity. I thought the idea was to provide it in the ghc release, but not install it (since cabal install isn't going to be part of the ghc release after all). I don't think we use ghc-paths in GHC's Haddock now (Ian - correct me if I'm wrong). And it's hard to provide ghc-paths with a GHC installation, because it would have to be built at install-time, or at least binary-patched. So I think we decided it was easiest to just let the user install it with cabal-install later. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: planning for ghc-6.10.1 and hackage [or: combining packages to yield new type correct programs]
Don Stewart wrote: Here's a summary of why this is non-trivial,

* We're trying to compose packages on the users machine to yield new type correct programs.
* We're using cabal dependencies to decide when it is safe to do this. Hopefully we don't rule in any type incorrect combinations, nor rule out too many type correct combinations.
* This is scary - in fact, we think the package system admits type incorrect programs (based on Typeable, or initialised global state), as it is similar to the runtime linking problem for modules.

I think what you're referring to is the problem that occurs if the program links in more than one copy of Data.Typeable, which would then invalidate the assumptions that make Data.Typeable's use of unsafeCoerce safe. I wouldn't call this "type-incorrect" - it's a violation of assumptions made by Data.Typeable, not type-incorrectness in the Haskell sense. But you'll be glad to know this doesn't happen anyway, because Data.Typeable's state is held by the RTS these days, for exactly this reason.

However, there are libraries which do have private state (e.g. System.Random). We'd prefer not to have more than one copy of the state, but it's not usually fatal: in the case of System.Random, different clients might get streams of random numbers initialised from different seeds, but that's indistinguishable from sharing a single stream of random numbers. Often this global-state stuff is for caching, which works just fine when multiple clients use different versions of the library - it's just a bit less efficient.

* We use constraint solving to determine when composition is safe, by looking at "package >= 3 && < 4" style constraints. That is, we try to guess when the composition would yield a type correct program.

The way to make this completely safe is to ensure that the resulting program only has one of each module, rather than one of each package version - that's an approximation, because the package name might have changed too. 
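The "private state" mentioned above (the System.Random kind) is almost always built with the well-known unsafePerformIO/NOINLINE idiom. A sketch, with made-up names for illustration:

```haskell
import Data.IORef (IORef, newIORef, atomicModifyIORef)
import System.IO.Unsafe (unsafePerformIO)

-- A top-level mutable cell.  The NOINLINE pragma is essential: if the
-- definition were inlined, each use site could allocate its own IORef.
-- Linking two copies of a library into one program duplicates this
-- cell in exactly the same way, which is the hazard discussed above.
{-# NOINLINE globalCounter #-}
globalCounter :: IORef Int
globalCounter = unsafePerformIO (newIORef 0)

-- Hand out successive ticket numbers from the shared cell.
nextTicket :: IO Int
nextTicket = atomicModifyIORef globalCounter (\n -> (n + 1, n))
```

With a single copy of the library, successive calls to nextTicket yield 0, 1, 2, ...; with two copies linked in, each copy counts independently.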
Now, we want to relax this in various ways. One way is the base-3/base-4 situation, where base-3 has a lot of the same modules as base-4, but all they do is re-export stuff from other packages. How do we know this is safe? Well, we don't - the only way is to check whether the resulting program typechecks.

Another way we want to relax it is when a dependency is "private" to a package; that is, the package API is completely independent of the dependency, and hence changing the dependency cannot cause compilation failure elsewhere. We've talked in the past about how it would be nice to distinguish private from non-private dependencies.

Let's be clear: there are only two ways that something could "go wrong" when composing packages:

1. the composition is not type-correct; you get a compile-time error
2. some top-level state is duplicated; if the programmer has been careful in their use of unsafePerformIO, then typically this won't lead to a run-time error.

So it's highly unlikely you end up with a program that goes wrong at runtime, and in those cases arguably the library developer has made incorrect assumptions about unsafePerformIO.

* Again, we're using constraint solving on this language to determine when composition of Haskell module sets (aka packages) would yield type correct Haskell programs. All without attempting to do type checking of the interfaces between packages -- the very thing that says whether this is sound!

True - but we already know that package/version pairs are a proxy for interfaces, and subject to user failure. If the package says that it compiles against a given package/version pair, there's no guarantee that it actually does; that's up to the package author to ensure. Now obviously we'd like something more robust here, but that's a separate problem - not an unimportant one, but separate from the issue of how to make cabal-install work with GHC 6.10.1. cabal-install has to start from the assumption that all the dependencies are correct. 
Then it can safely construct a complete program by combining all the constraints, and additionally ensuring that the combination has no more than one of each module (and possibly relaxing this restriction when we know it is safe to do so).

* So, the solver for cabal-install has to be updated to allow the same package to have multiple, conflicting versions, as long as version X depends on version Y, and then not reject programs that produce this constraint.

Right.

* This is non-trivial, but we think this refactoring is possible, but it is hard. Ultimately we're still making optimistic assumptions about when module sets can be combined to produce type correct programs, and conservative assumptions, at the same time. What we need is a semantics for packages, that in turn uses a semantics for modules, that explains interfaces in terms of types.

The semantics is quite straigh
Re: planning for ghc-6.10.1 and hackage [or: combining packages to yield new type correct programs]
Simon Marlow wrote: So I'm not sure exactly how cabal-install works now, but I imagine you could search for a solution with a backtracking algorithm, and prune solutions that involve multiple versions of the same package, unless those two versions are allowed to co-exist (e.g. base-3/base-4). If backtracking turns out to be too expensive, then maybe more heavyweight constraint-solving would be needed, but I'd try the simple way first.

Attached is a simple backtracking solver. It doesn't do everything you want, e.g. it doesn't distinguish between installed and uninstalled packages, and it doesn't figure out for itself which versions are allowed together (you have to tell it), but I think it's a good start. It would be interesting to populate the database with a more realistic collection of packages and try out some complicated install plans. Cheers, Simon

module Main(main) where

import Data.List
import Data.Function
import Prelude hiding (EQ)

type Package   = String
type Version   = Int
type PackageId = (Package,Version)

data Constraint = EQ Version | GE Version | LE Version
  deriving (Eq,Ord,Show)

satisfies :: Version -> Constraint -> Bool
satisfies v (EQ v') = v == v'
satisfies v (GE v') = v >= v'
satisfies v (LE v') = v <= v'

allowedWith :: PackageId -> PackageId -> Bool
allowedWith (p,v1) (q,v2) = p /= q || v1 == v2 || multipleVersionsAllowed p

type Dep = (Package, Constraint)

depsOf :: PackageId -> [Dep]
depsOf pid = head [ deps | (pid',deps) <- packageDB, pid == pid' ]

packageIds :: Package -> [PackageId]
packageIds pkg = [ pid | (pid@(n,v),_) <- packageDB, n == pkg ]

satisfy :: Dep -> [PackageId]
satisfy (target,constraint) =
  [ pid | pid@(_,v) <- packageIds target, v `satisfies` constraint ]

-- | solve takes a list of dependencies to resolve, and a list of
-- packages we have decided on already, and returns a list of
-- solutions.
--
solve :: [Dep] -> [PackageId] -> [[PackageId]]
solve [] sofar = [sofar]  -- no more deps: we win
solve (dep:deps) sofar =
  [ solution | pid <- satisfy dep,
               pid `consistentWith` sofar,
               solution <- solve (depsOf pid ++ deps) (pid:sofar) ]

consistentWith :: PackageId -> [PackageId] -> Bool
consistentWith pid = all (pid `allowedWith`)

plan :: Package -> [[PackageId]]
plan p = pretty $ solve [(p,GE 0)] []

pretty = nub . map (nub.sort)

main = do
  print $ plan "p"
  print $ plan "yi"

-- ---------------------------------------------------------------
-- Data

packageDB :: [(PackageId, [Dep])]
packageDB =
  [ (("base",3), []),
    (("base",4), []),
    (("p", 1), [("base", LE 4), ("base", GE 3), ("q", GE 1)]),
    (("q", 1), [("base", LE 3)]),
    (("bytestring",1), [("base", EQ 4)]), -- installed
    (("bytestring",2), [("base", EQ 4)]), -- installed
    (("ghc", 1), [("bytestring", EQ 1)]), -- installed
    (("ghc", 2), [("bytestring", GE 2)]),
    (("yi", 1), [("ghc", GE 1), ("bytestring", GE 2)]) ]

multipleVersionsAllowed :: Package -> Bool
multipleVersionsAllowed "base" = True  -- approximation, of course
multipleVersionsAllowed _      = False

___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: planning for ghc-6.10.1 and hackage
Duncan Coutts wrote: I propose two solutions: * Fix the dependency resolver * Add support in Cabal and Hackage for suggested version constraints Simon PJ just came up with a suggestion for the second part. The idea is this: If we see a dependency like "base >= 3" with no upper limit, we should satisfy it with base-3 in preference to base-4, on the grounds that the package is much more likely to build with base-3. This seems to be a solution that works without any magic shims or "preference files" or anything else. Perhaps we could even go as far as saying "base >= 3.0" is equivalent to "base == 3.0.*". i.e. if you don't supply an upper bound, then we'll give you a conservative one. I wonder how much stuff would break if we did that. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
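As a toy model of that defaulting rule (the types here are made up for illustration; this is not Cabal's own representation), a bare lower bound gets rewritten to a conservative major-version range:

```haskell
-- A version range, as it might appear in a .cabal dependency.
data VersionRange
  = AtLeast [Int]      -- e.g. "base >= 3.0": a lower bound, no upper bound
  | WithinMajor [Int]  -- e.g. "base == 3.0.*": pinned to one major version
  deriving (Eq, Show)

-- The proposed defaulting: "base >= 3.0" is treated as "base == 3.0.*",
-- i.e. an omitted upper bound is filled in conservatively by pinning
-- the major version (taken here to be the first two components).
conservative :: VersionRange -> VersionRange
conservative (AtLeast v) = WithinMajor (take 2 v)
conservative r           = r
```

So conservative (AtLeast [3,0]) is WithinMajor [3,0], while an explicit range is left alone.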
Re: could not find link destinations
Wolfgang Jeltsch wrote: Hello, I built GHC 6.10.0.20080927 on Debian GNU/Linux for i386. At the end of the build process I got numerous warnings of the form “could not find link destinations for: […]”. Is this intentional? These warnings are from Haddock, and are emitted when it can't decide how to hyperlink a particular identifier. Sometimes they are cause for concern, and we should really take a look through them before the release. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: ghc-6.10.0.20081005 binary refers to (non existent?) libedit.so.0
Johannes Waldmann wrote: Dear all, I tried to install the binary snapshot on Debian (etch) x86_64 and got: /usr/local/lib/ghc-6.10.0.20081005/ghc: error while loading shared libraries: libedit.so.0: cannot open shared object file: No such file or directory

I do have libedit (I think):

  lrwxrwxrwx 1 root root     14 2007-11-30 16:01 /usr/lib/libedit.so.2 -> libedit.so.2.9
  -rw-r--r-- 1 root root 140640 2006-03-19 15:26 /usr/lib/libedit.so.2.9

Is this the right package? dpkg -l "libedit*" ... ii libedit2 2.9.cvs.20050518 BSD editline and history libraries

For some reason it seems that Fedora and Debian are using different versions of the libedit shared library. Our binary installers are built on Fedora. For the release we'll probably have two binary distributions, one for Fedora and one for Debian(-based) systems. You might be able to get away with symlinking libedit.so.0 to libedit.so.2 in the meantime. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: System.Process.runInteractiveCommand, exit_group ()
Johannes Waldmann wrote: Solved - exit_group() wasn't the problem. My wrapper program silently died from SIGPIPE. This is something we changed in GHC 6.10.1, incidentally. Now SIGPIPE doesn't silently exit the program, and it will get an exception instead. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Control.Exception
Johannes Waldmann wrote: with 6.10, the following does not typecheck: foo `Control.Exception.catch` \ _ -> return bar Ambiguous type variable `e' in the constraint: `Control.Exception.Exception e' It is probably bad programming style anyway but what is the workaround? As long as you're aware that it is bad programming style. We deliberately didn't include an easy way to do this, because we want people to think about why they need to catch *all* exceptions (most of the time it's a bug). Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
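For the record, the deliberate way to catch *all* exceptions under the 6.10 extensible-exceptions API is to pin the handler's type to SomeException, the root of the new exception hierarchy. A hedged sketch (the example computation is made up for illustration):

```haskell
import Control.Exception

-- Resolve the ambiguous type variable by fixing the handler's
-- argument to SomeException, which every exception converts to.
-- Think twice before using this: it also swallows exceptions like
-- UserInterrupt that you usually want to propagate.
catchAny :: IO a -> (SomeException -> IO a) -> IO a
catchAny = Control.Exception.catch

-- Example: a division by zero is forced with evaluate, caught by the
-- catch-all handler, and replaced with a default value.
example :: IO Integer
example = catchAny (evaluate (1 `div` 0)) (\_ -> return 0)
```

Here example returns 0 instead of crashing with a divide-by-zero error.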
Re: thread/socket behavior
Jeff Polakow wrote: Don Stewart <[EMAIL PROTECTED]> wrote on 10/09/2008 02:56:02 PM: > jeff.polakow: > >We have a server that accepts messages over a socket, spawning threads to > >process them. Processing these messages may cause other, outgoing > >connections, to be spawned. Under sufficient load, the main server loop > >(i.e. the call to accept, followed by a forkIO), becomes nonresponsive. > > > >A smaller distilled testcase reveals that when sufficient socket activity > >is occurring, an incoming connection may not be responded to until other > >connections have been cleared out of the way, despite the fact that these > >other connections are being handled by separate threads. One issue that > >we've been trying to figure out is where this behavior arises from-- the > >GHC rts, the Network library, the underlying C libraries. > > > >Have other GHC users doing applications with large amounts of > socket usage > >observed similar behavior and managed to trace back where it originates > >from? Are there any particular architectural solutions that people have > >found to work well for these situations? > > Hey Jeff, > > Can you say which GHC you used, and whether you used the threaded > runtime or non-threaded runtime? > Oops, forgot about that... We used both ghc-6.8.3 and ghc-6.10.rc1 and we used the threaded runtime. We are running on a 64 bit linux machine using openSUSE 10. The scheduler doesn't have a concept of priorities, so the accepting thread will get the same share of the CPU as the other threads. Another issue is that the accepting thread has to be woken up by the IO manager thread when a new connection is available, so we might have to wait for the IO manager thread to run too. But I wouldn't expect to see overly long delays. Maybe you could try network-alt which does its own IO multiplexing. If you have multiple cores, you might want to try fixing the thread affinity - e.g. 
put all the worker threads on one core, and the accepting thread on the other core. You can do this using GHC.Conc.forkOnIO, with the +RTS -qm -qw options. Other than that, I'm not sure what to try right now. We're hoping to get some better profiling for parallel/concurrent programs in the future, but it's not ready yet. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
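The affinity trick can be sketched like this. Note this is a hedged sketch: GHC.Conc.forkOnIO from the 6.10 era is nowadays exported as forkOn from Control.Concurrent, the capability numbers are illustrative, and in practice you would compile with -threaded and run with +RTS -N2 (or the historical -qm -qw flags mentioned above) for the pinning to matter:

```haskell
import Control.Concurrent

-- Pin a worker thread to capability 1 and the "accept loop" to
-- capability 0, so heavy worker load cannot starve the accepting
-- thread of its capability.  The MVar handshake below stands in for
-- real socket traffic.
pinnedHandshake :: IO String
pinnedHandshake = do
  box    <- newEmptyMVar
  result <- newEmptyMVar
  _ <- forkOn 1 (putMVar box "worker ran")         -- worker capability
  _ <- forkOn 0 (takeMVar box >>= putMVar result)  -- accept-loop capability
  takeMVar result                                  -- wait for the handshake
```

With one capability (no +RTS -N) the pinning is a no-op but the code still runs, since forkOn wraps the capability number around.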
Re: ANNOUNCE: GHC 6.10.1 RC 1
Judah Jacobson wrote: One small thing I've noticed: UserInterrupt (ctrl-c) exceptions are not thrown in ghci, probably because it installs its own signal handlers: Prelude Control.Exception Control.Concurrent> handle (\UserInterrupt -> putStrLn "Caught!") (threadDelay 200) ^CInterrupted. For consistency between the compiled and interpreted environments, it would be nice if the above could catch the ctrl-c. But maybe there's a reason not to do this? If this change sounds OK, I can take a look at this and try to put together a patch over the weekend. Hmm, tricky one. I agree with the argument for consistency, but on the other hand you might also want a way to interrupt a computation regardless, and that almost works - as long as the program isn't discarding exceptions it knows nothing about. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: breakage with Cabal-1.6
Duncan Coutts wrote: * alex-2.2 * happy-1.17 Imports buildVerbose from Distribution.Simple.Setup ( BuildFlags(..) ) however the flag has been renamed to buildVerbosity and with a different type. I would export a compat function but it would not help here since the Setup script expects it to be a record selector from BuildFlags. Cabal-1.4 contained a dodgy hack to make this continue to work, but I'm not doing that again. If I added a compat function then we could at least make it work with both ghc/cabal versions with a single implementation. So I might do that and do point releases of these packages, if Simon thinks that's ok. Really of course both packages should stop calling an external perl and other similar madness. I can stop calling Perl, but I still need to run CPP, so I still need the runProgram stuff, which means I still need buildVerbose/buildVerbosity. As far as I can see, you could export a compatibility shim called buildVerbose without any difficulty, all I have to do is remove the explicit import list. Or is there a better fix you had in mind? Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: breakage with Cabal-1.6
Duncan Coutts wrote: On Fri, 2008-10-10 at 16:13 +0100, Simon Marlow wrote: As far as I can see, you could export a compatibility shim called buildVerbose without any difficulty, Done. all I have to do is remove the explicit import list. Or is there a better fix you had in mind? Patches for alex and happy attached. Their Setup.lhs now works with 1.2, 1.4 and 1.6 (1.6.0.1). Thanks! New versions of Alex & Happy uploaded. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Breakage with 6.10
Ian Lynagh wrote: On Fri, Oct 10, 2008 at 03:54:07PM -0700, Duncan Coutts wrote: On Fri, 2008-10-10 at 15:34 -0700, Don Stewart wrote: arrows fails due to: [ 3 of 12] Compiling Control.Arrow.Transformer.CoState ( Control/Arrow/Transformer/CoState.hs, dist/build/Control/Arrow/Transformer/CoState.o ) Control/Arrow/Transformer/CoState.hs:24:29: Module `Control.Arrow' does not export `pure' Even though cabal-install decided to use base-3.0.3.0 So that means base-3 is *not* exporting quite the same interface as last time. Were we aware of this? Note that this means that base-3 should have the version number 3.1.x.y because there is at least one incompatible api change. We're promoting the versioning policy so we need to follow it ourselves in our base libs. I don't think that that really helps. If you're going to depend on base 3.1, you might as well just depend on base 4 and be more future-proof. The base-compat package needs to claim to have the same API as the old base, because the point is that things just keep on working (except in the few cases, like the Arrow split, where that isn't possible). Hmm, this is quite annoying. We simply *can't* provide the same API as base-3.0.2 without defining a new Arrow class, and that would kill compatibility with base-4. On the other hand, we did know this could happen (changes to datatypes also cause the same problem), it's the tradeoff with trying to provide base-3 as a compatibility layer over base-4. So the options are: * if we are honest and call it base-3.1, then everyone else has to lie and use dependencies like base==4.*. If they use dependencies like base==4.0, which are more correct, then these packages will break in GHC 6.12 if we have to ship base-4.1 rather than 4.0. * we lie (a bit) and call it base-3.0.3. so the total amount of dishonesty is reduced if we pick the second option :-) Perhaps we've found a use for the second digit of the major version number. 
Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: 2008-10-12 Hackage status with GHC 6.10 release candidate
Don Stewart wrote: Note that these builds are with "soft deps", provided on hackage,

  base < 4
  parsec < 3
  HaXml == 1.13.*
  QuickCheck < 2

which train cabal-install to build a larger set of packages.

Will this happen automatically somehow, or will users have to do this manually?

The important result: *46 packages produce different results to ghc 6.8.2* These packages and their logs are listed below. If you maintain one of the following packages, and are able to fix it before GHC 6.10 is released, your users will be happy. The most common issues for these differences are,

* Changes to Arrow class definition
* Changes to types of Map and Set functions

It would be feasible to provide a containers-0.1, if anyone thinks that's worthwhile.

* Cabal changes
* Changes to ghc-api
* Changes to when 'forall' is parsed (add Rank2Types)
* GHC.Prim was moved,

Nobody should be importing GHC.Prim, use GHC.Exts instead.

* Changes to -fvia-C and headers

I wrote a more detailed entry for the release notes about this: http://www.haskell.org/ghc/dist/stable/docs/users_guide/release-6-10-1.html ("FFI change" under "User-visible compiler changes") Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: A wiki page for managing the 6.10 handover
Don Stewart wrote: http://haskell.org/haskellwiki/Upgrading_packages#Typical_breakages_with_GHC_6.10 It collects the 7 or so known issues that break code with GHC 6.10. Please feel free to clean up, and especially *add techniques for handling each change*. If we do this right, with cabal-install being smart, clear summaries of what is broken and how to fix it, this might be the smoothest major release yet. This is wonderful. I've been making noises about trying to make the transition smoother this time, but you and Duncan have done most of the hard work. Not only that, but the process has turned up some bugs that hopefully we'll be able to fix in time for the release. Great stuff, thanks guys! Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: breakage with Cabal-1.6
Bryan O'Sullivan wrote: On Mon, Oct 13, 2008 at 1:58 AM, Simon Marlow <[EMAIL PROTECTED]> wrote: Thanks! New versions of Alex & Happy uploaded. Where to? I only see 2.3 on Hackage, and haskell.org claims that Alex is still at 2.2. Just on Hackage for the time being, I'll update the web pages after a bit of testing. There's already been a fix to Happy since I uploaded 1.18. Cheers, Simon
Re: Strictness in data declaration not matched in assembler?
Lennart Augustsson wrote: True, if there can be indirections then that's bad news. That would make strict fields much less efficient. But I don't see why indirections should be needed. Simon? This kind of thing has always been a problem for GHC, and IIRC hbc does/did better here. I don't know for sure whether we guarantee not to point to an indirection from a strict constructor field. I imagine it wouldn't be hard to arrange, but it is another invariant we'd have to maintain throughout the Core->Core phases. The real problem is that strictness is not represented in the type system, so we have no way to check that these kinds of invariants are being respected. Cheers, Simon On Wed, Oct 15, 2008 at 4:21 PM, Jan-Willem Maessen <[EMAIL PROTECTED]> wrote: On Oct 15, 2008, at 11:08 AM, Lennart Augustsson wrote: I totally agree. Getting the value of the field should just evaluate x and then use a pointer indirection; there should be no conditional jumps involved in getting the value. GHC is just doing the wrong thing. Can indirection nodes occur in this context? [I'd think not, but it depends on what pointer we're storing when we force the thunk in the constructor.] I could see the need for the test if indirection handling is required. -Jan -- Lennart On Wed, Oct 15, 2008 at 3:58 PM, Tyson Whitehead <[EMAIL PROTECTED]> wrote: On Wednesday 15 October 2008 10:48:26 you wrote: Strictness does not imply unboxing. To see why not, think about the fact that unboxing breaks sharing. By keeping the pointer-indirection in place, we can share even strict fields between related values. I believe I realize that. What I was wondering about was the fact that it seemed to think the pointer might be to a thunk (instead of a constructor closure).
Doesn't the strictness flag mean the following assembler would work:

sni_info:
    movq 7(%rbx),%rbx
    movq $snj_info,(%rbp)
    jmp snj_info

(which could be cleaned up further by combining it with snj_info) instead of

sni_info:
    movq 7(%rbx),%rbx
    movq $snj_info,(%rbp)
    testq $7,%rbx
    jne snj_info
    jmp *(%rbx)

(i.e., the whole test-if-it-is-a-thunk-and-conditionally-evaluate-it bit is unnecessary due to the constructor's strictness flag). Cheers! -Tyson
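For readers following the thread, the source-level distinction being discussed (strictness does not imply unboxing) can be written down directly. This is a sketch of mine, not code from the thread; the UNPACK pragma is how you ask GHC to actually unbox a strict field, at the cost of losing sharing:

```haskell
-- A strict field: the argument is forced when S is constructed,
-- but it is still stored behind a pointer unless unpacked.
data S = S !Int

-- An unpacked strict field: GHC stores the raw Int# directly in the
-- constructor, so reading it never involves entering a closure.
data U = U {-# UNPACK #-} !Int

main :: IO ()
main = do
  let S x = S (1 + 2)   -- (1 + 2) is evaluated when the S is built
  print x
```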
Re: GHC 6.10 confusion
This should actually be fixed in more recent snapshots. If it isn't, please let me know. Cheers, Simon J. Garrett Morris wrote: My hero! This resolved my problem. /g On Fri, Oct 17, 2008 at 8:38 AM, Mitchell, Neil <[EMAIL PROTECTED]> wrote: Hi See: http://hackage.haskell.org/trac/ghc/ticket/2585 The solution is to grab a version of GHC 6.8 and copy its windres.exe into the GHC 6.10 bin directory. This will be fixed before release. Thanks Neil -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of J. Garrett Morris Sent: 17 October 2008 4:21 pm To: GHC users Subject: GHC 6.10 confusion Hello everyone, I've been attempting to build darcs under ghc-6.10.0.20080920. Currently, I'm getting the following error: ghc failed with: D:\Programs32\GHC\ghc-6.10.0.20080920\bin/windres: CreateProcess (null): No error Does anyone recognize this? Any pointers for where I should be looking? This is on Vista 64-bit SP1. /g -- I am in here
Re: thread/socket behavior
I'll be interested to know if the fix helps your application. The bug reported in #2703 results in the program just allocating memory endlessly until it dies, so it doesn't sound exactly like the symptoms you were originally describing. Cheers, Simon Jeff Polakow wrote: Hello, Just writing to let people know the resolution of this problem... After much frustration and toil, we realized there was a bug in GHC's handle abstraction over sockets. We resolved our immediate problem by having our code deal directly with the sockets, and we filed a bug report, #2703, which has just been (partially fixed) by Simon Marlow. thanks, Jeff Simon Marlow <[EMAIL PROTECTED]> wrote on 10/10/2008 09:23:31 AM: > Jeff Polakow wrote: > > > Don Stewart <[EMAIL PROTECTED]> wrote on 10/09/2008 02:56:02 PM: > > > > > jeff.polakow: > > > >We have a server that accepts messages over a socket, spawning > > threads to > > > >process them. Processing these messages may cause other, outgoing > > > >connections, to be spawned. Under sufficient load, the main > > server loop > > > >(i.e. the call to accept, followed by a forkIO), becomes > > nonresponsive. > > > > > > > >A smaller distilled testcase reveals that when sufficient socket > > activity > > > >is occurring, an incoming connection may not be responded to > > until other > > > >connections have been cleared out of the way, despite the fact > > that these > > > >other connections are being handled by separate threads. One > > issue that > > > >we've been trying to figure out is where this behavior arises > > from-- the > > > >GHC rts, the Network library, the underlying C libraries. > > > > > > > >Have other GHC users doing applications with large amounts of > > > socket usage > > > >observed similar behavior and managed to trace back where it > > originates > > > >from? Are there any particular architectural solutions that > > people have > > > >found to work well for these situations? 
> > > > > > Hey Jeff, > > > > > > Can you say which GHC you used, and whether you used the threaded > > > runtime or non-threaded runtime? > > > > > Oops, forgot about that... > > > > We used both ghc-6.8.3 and ghc-6.10.rc1 and we used the threaded > > runtime. We are running on a 64 bit linux machine using openSUSE 10. > > The scheduler doesn't have a concept of priorities, so the accepting thread > will get the same share of the CPU as the other threads. Another issue is > that the accepting thread has to be woken up by the IO manager thread when > a new connection is available, so we might have to wait for the IO manager > thread to run too. But I wouldn't expect to see overly long delays. Maybe > you could try network-alt which does its own IO multiplexing. > > If you have multiple cores, you might want to try fixing the thread > affinity - e.g. put all the worker threads on one core, and the accepting > thread on the other core. You can do this using GHC.Conc.forkOnIO, with > the +RTS -qm -qw options. > > Other than that, I'm not sure what to try right now. We're hoping to get > some better profiling for parallel/concurrent programs in the future, but > it's not ready yet. > > Cheers, >Simon
Re: cabal
Claus Reinke wrote: The basic problem here is that the version number of the network package has not been bumped. .. .. Of course that's not true here because the package has changed without the version being bumped. .. Indeed the only reason it's trying to rebuild it at all is because the installed version has different deps from the available version, again due to the fact that it changed without changing version number. So the solution is for the updated network package to have its version bumped and for it to be released. For ghc at least, couldn't cabal grep the hashes from the output of find dist/ -name '*.hi' | xargs ghc --show-iface and associate the collection of hashes for the exposed-modules and cross-package imports with the version number, keeping a history of these associations? cabal tag-current would try to add the current version number with the current hashes, complaining if the number already exists with different hashes cabal sdist (and other distribution channels) would check that the current version number is in the history with the current hashes, and complain otherwise Distributing packages without version number checks could result in "unverified" packages, so users would know that the dependencies and version number haven't been checked (successful checks could create a package signature based on .cabal+.history, or on the whole package contents). Or are GHC's new hashes non-portable/too specific? GHC's hashes aren't suitable for this (yet). We do not hash the API, but rather the ABI, and the ABI is often not stable - re-compiling can give you a different ABI, as internal names change and things move around in unpredictable ways. However I do think we should have a way to get a dump of the API. We've talked in the past about having some kind of API tool that would compare APIs and show you the differences (built on the GHC API of course). This would make a nice little project for someone...
Cheers, Simon
Re: could ghci debugger search for free variables better?
Peter Hercek wrote: Maybe my approach to debugging with ghci is wrong but in about half of the time I find ghci (as a debugger) almost useless. The reason is the limited way it can resolve identifiers. I can examine the free variables in the selected expression and nothing else. Well, I *think* just sometimes I can examine a few more variables. But if it happens at all it is rare. Is there a way to make ghci recognize all the variables which could be visible in the selected expression? By "could be visible" I mean they are in scope and can be used in the expression if I would edit the source code. We thought about this when working on the debugger, and the problem is that to make the debugger retain all the variables that are in scope rather than just free in the expression adds a lot of overhead, and it fundamentally changes the structure of the generated code: everything becomes recursive, for one thing. Well, perhaps you could omit all the recursive references (except the ones that are also free?), but there would still be a lot of overhead due to having to retain all those extra references. It also risks creating serious space leaks, by retaining references to things that the program would normally discard. Fortunately it's usually easy to work around the limitation, just by adding extra references to your code, e.g. in a let expression that isn't used. Cheers, Simon
Re: Control.Exception
Jason Dagit wrote: On Wed, Oct 8, 2008 at 1:19 AM, Simon Marlow <[EMAIL PROTECTED]> wrote: Johannes Waldmann wrote: with 6.10, the following does not typecheck: foo `Control.Exception.catch` \ _ -> return bar Ambiguous type variable `e' in the constraint: `Control.Exception.Exception e' It is probably bad programming style anyway but what is the workaround? As long as you're aware that it is bad programming style. We deliberately didn't include an easy way to do this, because we want people to think about why they need to catch *all* exceptions (most of the time it's a bug). Since the above is bad form, what should I be doing? Could someone please provide some examples or point me at the list of exceptions that I can catch? What about catching multiple types of exceptions? Let's distinguish two kinds of exception handling: 1. Cleaning up. If you want to catch errors in order to clean up - release resources, remove temporary files, that sort of thing - then use bracket or finally. Behind the scenes, these catch all exceptions, but crucially they re-throw the exception after cleaning up, and they do the right block/unblock stuff for asynchronous exceptions. 2. Recovery. You want to catch certain kinds of exception in order to recover and do something else, e.g. when calling getEnv. In that case, I recommend using try or tryJust. tryJust (guard . isDoesNotExistError) $ getEnv "HOME" It's good practice to separate the filter (the kinds of exception you're catching) from the code to handle them, and that's what tryJust does. There's some subtlety here to do with whether you need to be in "blocked" mode to handle the exception or not: if you're handling an exception you expect to be thrown asynchronously, then you probably want to use catch instead of try, because then the handler will run in blocked mode. But be careful not to tail-call out of the handler, because then the thread will stay in blocked mode, which will lead to strange problems later.
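As a runnable illustration of the second pattern (recovery), here is a minimal sketch of mine using tryJust with isDoesNotExistError; the environment variable name is made up for the example:

```haskell
import Control.Exception (tryJust)
import Control.Monad (guard)
import System.Environment (getEnv)
import System.IO.Error (isDoesNotExistError)

main :: IO ()
main = do
  -- The filter (guard . isDoesNotExistError) selects exactly which
  -- IOErrors we recover from; anything else propagates as usual.
  r <- tryJust (guard . isDoesNotExistError) (getEnv "SOME_UNLIKELY_VAR_12345")
  case r of
    Left ()  -> putStrLn "variable not set, using a default"
    Right v  -> putStrLn v
```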
A bit more background is here: http://hackage.haskell.org/trac/ghc/ticket/2558 (hmm, perhaps exception handlers should be STM transactions. Then you wouldn't be able to accidentally tail-call out of the exception handler back into IO code, but you would be able to re-throw exceptions. Just a thought.) As for the kinds of exception you can catch, nowadays you can catch any type that is an instance of Exception. A good place to start is the list of instances of Exception in the docs: http://www.haskell.org/ghc/dist/stable/docs/libraries/base/Control-Exception.html#t%3AException although that only contains types defined by the base package. Others have commented on the backwards-compat issues, I don't have anything to add there. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
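To make the "any type that is an instance of Exception" point concrete, the handler's type is what selects which exceptions are caught. A small sketch of mine using ArithException, one of the base instances listed in the documentation above:

```haskell
import Control.Exception (ArithException, evaluate, try)

main :: IO ()
main = do
  -- The annotation on the result picks out ArithException; a thrown
  -- exception of any other type would not be caught here.
  r <- try (evaluate (1 `div` 0 :: Int)) :: IO (Either ArithException Int)
  case r of
    Left e  -> putStrLn ("caught: " ++ show e)
    Right v -> print v
```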
Re: ANNOUNCE: GHC 6.10.1 RC 1
Paul Jarc wrote: Ian Lynagh <[EMAIL PROTECTED]> wrote: I thought all your problems boiled down to binaries not being able to find libgmp.so at runtime? So I think this should fix them all. Yes, but then I wouldn't be able to find and fix the commands that are missing SRC_HC_OPTS. :) So I'm holding off on that for now. Below is a patch for the ones I've found so far. With those changes, and without setting LD_LIBRARY_PATH, the build stops here: Technically speaking, we should pass $(HC_OPTS) to any Haskell compilations. That includes $(SRC_HC_OPTS), but also a bunch of other things. Cheers, Simon
Re: No atomic read on MVar?
Philip K.F. Hölzenspies wrote: I ran face first into an assumption I had made on MVar operations (in Control.Concurrent); I had assumed there to be an atomic read (i.e. non-destructive read, as opposed to destructive consume/take). The following program illustrates what I had in mind.

testAtomic :: IO ()
testAtomic = do
  var <- newMVar 0
  putStrLn "Fork"
  forkIO (putMVar var 1 >> putStrLn "X")
  yield
  r1 <- readMVar var
  putStrLn "1"
  r2 <- takeMVar var
  putStrLn "2"
  r3 <- takeMVar var
  putStrLn ("Result: " ++ show [r1,r2,r3])

If readMVar had been atomic, the result would be program termination with a result of [0,0,1] being output. However, readMVar simply combines takeMVar and putMVar, so the reading of r1 blocks after the takeMVar, because upon taking the MVar, the blocked thread wakes up, puts 1 in var and prints X. readMVar does not terminate for r1 (i.e. "1" is never printed). I have now implemented my variable as a pair of MVars, one of which serves as a lock on the other. Both for performance reasons and for deadlock analysis, I would really like an atomic read on MVars, though. Does it exist? If not, why not? It would be slightly annoying to implement, because it needs changes in putMVar too: if there are blocked readMVars, then putMVar would have to wake them all up. Right now an MVar can only have one type of blocked thread attached to it at a time, either takeMVars or putMVars, and putMVar only has to wake a single thread. Perhaps you should be using STM? I suppose the answer to "why doesn't atomic readMVar exist" is that MVar is intended to be a basic low-level synchronisation abstraction, on which you can build larger abstractions (which you have indeed done). On the other hand, we're always interested in getting good value out of the building blocks, so when there are useful operations we can add without adding distributed complexity, that's often a good idea. I'm not sure that atomic readMVar falls into this category, though.
Cheers, Simon
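For what it's worth, the pair-of-MVars arrangement the poster describes can be sketched as below. The names are mine, not a proposed API, and the scheme only works if writers also go through the lock:

```haskell
import Control.Concurrent.MVar

-- A cell guarded by a separate lock MVar. A non-destructive read holds
-- the lock while it does readMVar's take-then-put, so no writer (which
-- must also acquire the lock) can slip in between the two halves.
data Guarded a = Guarded (MVar ()) (MVar a)

newGuarded :: a -> IO (Guarded a)
newGuarded x = Guarded <$> newMVar () <*> newMVar x

atomicRead :: Guarded a -> IO a
atomicRead (Guarded lock cell) = withMVar lock (\_ -> readMVar cell)

atomicWrite :: Guarded a -> a -> IO ()
atomicWrite (Guarded lock cell) x =
  withMVar lock (\_ -> takeMVar cell >> putMVar cell x)

main :: IO ()
main = do
  g <- newGuarded (0 :: Int)
  atomicWrite g 1
  atomicRead g >>= print
```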
Re: could ghci debugger search for free variables better?
Peter Hercek wrote: Simon Marlow wrote: We thought about this when working on the debugger, and the problem is that to make the debugger retain all the variables that are in scope rather than just free in the expression adds a lot of overhead, and it fundamentally changes the structure of the generated code: everything becomes recursive, for one thing. Well, perhaps you could omit all the recursive references (except the ones that are also free?), but there would still be a lot of overhead due to having to retain all those extra references. It also risks creating serious space leaks, by retaining references to things that the program would normally discard. Fortunately it's usually easy to work around the limitation, just by adding extra references to your code, e.g. in a let expression that isn't used. Yes, Pepe pointed this to me too along with the "Step inside GHCi debugger" paper in monad reader. The problem is that I mostly can find out what is wrong when I look at values of some important variables when some important place in my code is hit. Using the trick with const function to manually add references is not that much better than simple "printf debugging" (adding Debug.Trace.trace calls to the code). Tracing the execution history is nice too but it provides much more than what is needed and obscures the important parts. OK, It is frustrating that I find "printf debugging" often more productive than ghci debugger. I see that it is not a good idea to keep references to all the variables in scope but maybe few improvements are possible: 1) As there is :steplocal, there should be also :tracelocal. It would keep history of evaluations within given function then when user asks for a variable it would be searched first in the selected expression and if not found in the expressions from the tracelocal history. If the result would be printed from tracelocal history it should be indicated so in the output. 
This would avoid the tedious task of searching the trace history manually and moreover it would limit the history to the interesting parts (so hopefully the depth of 50 would be enough). The results from the tracelocal history may not be from the expected scope sometimes but the same problem is with "printf debugging". Good suggestion - please submit it via the bugtracker, http://hackage.haskell.org/trac/ghc/newticket?type=feature+request 2) I noticed only now that I do not know how to script breakpoints. I tried :set stop if myFreeVar == 666 then :list else :continue ... and it did not work. My goal was to create a conditional breakpoint. I also wanted to use it instead of "printf debugging" using something like :set stop { :force myFreeVar; :continue } Ideally it should be possible to attach a different script to each breakpoint, and the functions for controlling the debugger should be available in Haskell. I would expect this is already partially possible now (using :set stop) and possibly some functions from the ghci api which correspond to ghci commands (like :set etc.). But I do not know how, any pointers from experienced ghci debugger users? I think you want :cmd. e.g. :set stop :cmd if myFreeVar == 666 then return ":list" else return ":continue" The ghci debugger did not know some functions in my code which I would expect it to know; e.g. field selection functions from a record which is not exported from the module but which are available within the module. Is this expected? (I did not have any *.hi *.o files around when ghci did run the code.) It could be a bug, if you could figure out how to reproduce it and submit a bug report that would be great. Oh, and sometimes it did not recognize a free variable in the selected expression. The code looked like let myFn x = x `div` getDivisor state > 100 in if myFn xxx then ...
the expression "myFn xxx" was selected while browsing trace history but xxx was not recognized, but when I browsed into myFn definition in the trace log the x (which represented the same value) was recognized. Is this expected? Again, please submit a bug report. The debugger is supposed to give you access to all of the free variables of the current expression. Cheers, Simon
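For reference, the :cmd approach above can also reproduce the "printf debugging" behaviour the poster wanted (print a variable, then keep going), since :cmd runs the returned string as a sequence of GHCi commands separated by newlines. A hypothetical session (myFreeVar is the poster's made-up variable name):

```
ghci> :set stop :cmd return ":force myFreeVar\n:continue"
```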
Re: Dilemma: DiffArray non-performance vs STArray non-readability
Claus Reinke wrote: I keep wanting to use DiffArray as the natural functional solution to single-threaded array use. But every time I try, I get smacked over the head with the actual performance figures. Sometimes, even plain arrays are faster in a loop doing array updates, in spite of all the copying involved. And when copying on update dominates the runtime, using IntMap tends to be faster - the "indirections" are the wrong way round, but don't pile up, just that array lookups aren't quite constant time. But when I really need to avoid the updates, and need constant time lookup, I'm stuck: DiffArray tends to slow everything down (I vaguely recall locks and threads being at the heart of this, but I haven't checked the code recently), so my only option seems to be to transform my nice functional code into not-nice sequential code and use STArray. Is there any way out of this dilemma? What do other GHC users use? If locks really are the issue, perhaps using STM instead of MVars in the DiffArray implementation could help. As long as my array uses are single-threaded, STM optimism might be able to avoid waiting/scheduler issues? Or am I on the wrong track? It needs to be thread-safe, but I imagine that using atomicModifyIORef rather than STM or MVars is the way to get good performance here. PS Btw, I thought the DiffArray performance issue was ancient, but I can't find a ticket for it, nor does the haddock page for Data.Array.Diff mention this little hiccup. Should I add a ticket? I see there is one now, thanks. Cheers, Simon
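To illustrate the atomicModifyIORef suggestion (this is a sketch of mine, not the actual DiffArray internals): the pure update function is applied atomically with respect to other threads, so a structure built on an IORef can stay thread-safe without taking an MVar lock on every operation:

```haskell
import Data.Array
import Data.IORef

main :: IO ()
main = do
  ref <- newIORef (listArray (0, 2) "abc" :: Array Int Char)
  -- The update (// [(1,'Z')]) is installed atomically; concurrent
  -- modifications cannot interleave with it.
  atomicModifyIORef ref (\a -> (a // [(1, 'Z')], ()))
  a <- readIORef ref
  putStrLn (elems a)
```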
Re: could ghci debugger search for free variables better?
Peter Hercek wrote: As for the rest of the message: those are possible bugs. If I can reduce them to a few tens of lines of a test, I'll post the bug reports. I use Archlinux and the last (non-testing) version of ghc there is ghc-6.8.2. Do you accept bug reports against it or do you need them against 6.10.1rc1 only? Bug reports against 6.8.2 are fine, but if you can test against 6.10.1 that's even better (it might weed out bugs that have been already fixed and thus save us some time). Cheers, Simon
Re: Control.Exception
Jason Dagit wrote: On Mon, Nov 3, 2008 at 6:24 AM, Simon Marlow <[EMAIL PROTECTED]> wrote: Jason Dagit wrote: On Wed, Oct 8, 2008 at 1:19 AM, Simon Marlow <[EMAIL PROTECTED]> wrote: Johannes Waldmann wrote: with 6.10, the following does not typecheck: foo `Control.Exception.catch` \ _ -> return bar Ambiguous type variable `e' in the constraint: `Control.Exception.Exception e' It is probably bad programming style anyway but what is the workaround? As long as you're aware that it is bad programming style. We deliberately didn't include an easy way to do this, because we want people to think about why they need to catch *all* exceptions (most of the time it's a bug). Since the above is bad form, what should I be doing? Could someone please provide some examples or point me at the list of exceptions that I can catch? What about catching multiple types of exceptions? Let's distinguish two kinds of exception handling: Thanks. This helps a lot. Mind if I put it somewhere, such as on the wiki? A good description of how to deal with exceptions would be great to have in the Haddock documentation for Control.Exception - would you (or someone else) like to write and submit a patch? Or failing that, just putting it on the wiki would be useful too. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Control.Exception
Jason Dagit wrote: On Tue, Nov 4, 2008 at 2:47 AM, Simon Marlow <[EMAIL PROTECTED]> wrote: Jason Dagit wrote: Thanks. This helps a lot. Mind if I put it somewhere, such as on the wiki? A good description of how to deal with exceptions would be great to have in the Haddock documentation for Control.Exception - would you (or someone else) like to write and submit a patch? Or failing that, just putting it on the wiki would be useful too. I don't mind submitting a patch. What is the URL of the repo I should download? http://darcs.haskell.org/packages/base and the file is Control/Exception.hs. Cheers, Simon
GHC blog
I finally got around to creating a GHC blog: http://ghcmutterings.wordpress.com/ This is for all things related to GHC, particularly people working on GHC to blog about what they're up to. If you want a write-bit, sign up for a wordpress account if you don't already have one (http://wordpress.com/), tell me your account name, and blog away! I've asked the planet.haskell.org admins to add us, so we should be on the planet soon. Cheers, Simon
Re: [Haskell-cafe] How to getCh on MS Windows command line?
Bulat Ziganshin wrote: > [quoted non-English text, garbled in the archive] Please stick to English on this list, thanks. Simon
Re: How to getCh on MS Windows command line?
Ahn, Ki Yung wrote: > What I mean by getCh is the non-buffered non-echoed version of getChar, > which Hugs used to provided as an extension but not any more. > > Is there any way to have a non-buffered non-echoed single character > input function on MS Windows command line using only the libraries in > the MS Windows distribution packages of either GHC or Hugs? > > The reason to why this is important for me is because I am translating > Graham Hutton's "Programming in Haskell" into Korean (soon to be > published), which illustrates interactive programming with the example > of a calculator that responds instantly for every keystroke of numbers > and arithmetic operations running on text mode. It is very important to > consider the readers who only have access to MS Windows systems, because > Korean OS market share is completely skewed towards MS Windows for very > embarrassing reasons that I do not even want to mention. And, isn't GHC > developed in MSR anyway? :-) > > > I remember that this is an old problem for both of the two most widely > used Haskell implementation, Hugs and GHC. > > In ghc 6.8 getChar had a bit strange behavior. As far as I remember it > always worked as if it were NoBuffering. Fortunately, in the recently > released ghc 6.10, this has been fixed. We now actually have to set the > buffering mode to NoBuffering with hSetBufferring to get the > non-buffered behavior of getChar. But, it still isn't working on MS > Windows. Here's the ticket: http://hackage.haskell.org/trac/ghc/ticket/2189 This needs somebody familiar with the intricacies of Windows Consoles to fix. Any takers? Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: how is Linux GHC formed?
Jason Dusek wrote: How was the Linux binary for GHC created? I am looking at ways to compile Haskell binaries for Linux that work across distros. So far, I have been using `-static -optl-static` but today there was a weird hiccup with IO -- the Gentoo built binary worked fine on Gentoo but caught SIGPIPE intermittently on Ubuntu -- and so it's back to the drawing board. The GHC linux binary seems to work on all the Linii without discrimination, so I'd like to know what the procedure is for producing it, and what I can take away from that to put in my Cabal file. We just do a normal build, on Fedora 9 boxen. If it works across other distros, it's probably just good luck! Cheers, Simon
Re: GHC on powerpc (OS X): segmentation fault
Chris Kuklewicz wrote: /private/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_lang_ghc/work/ghc-6.10.1/ghc/stage1-inplace/ghc -H32m -O -I/opt/local/include -L/opt/local/lib -optc-O2 -I../includes -I. -Iparallel -Ism -DCOMPILING_RTS -package-name rts -static -I/opt/local/include -I../gmp/gmpbuild -I../libffi/build/include -I. -dcmm-lint -c Apply.cmm -o Apply.o make[1]: *** [Apply.o] Segmentation fault make: *** [stage1] Error 2 Does anyone have a clue how to proceed? Perhaps this is the same as http://hackage.haskell.org/trac/ghc/ticket/2380 but others have managed to compile GHC on PPC/OSX. In general we don't have anyone actively working on fixing bugs in the PPC port at the moment, I'd love it if someone could step up and help us keep this platform fully-supported. Here are the current PPC-specific bugs: http://hackage.haskell.org/trac/ghc/query?status=new&status=assigned&status=reopened&architecture=powerpc&order=priority Cheers, Simon
Re: ANNOUNCE: GHC version 6.10.1
Magicloud wrote: Hi, It is a few days after the release. But why are the darcs repos of 6.10 still changing? The 6.10 repo is a branch. We tagged the 6.10.1 release, and we're now working towards a 6.10.2 release. Cheers, Simon
Re: Linking error during stage2
dermiste wrote: Hi, I've successfully built GHC-6.10.1 from 6.6.1 on OpenBSD 4.4, and would like now to generate a hc-file-bundle to build it without pre-existing GHC. I followed the instructions in [1], but I'm stuck with this error : Bootstrapping from HC files isn't supported in 6.10.1, it was last supported in 6.6.1. We aim to have it working again for 6.12. Sorry for the inconvenience! Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Newbish building question
J. Garrett Morris wrote: Hello, I've been attempting to add some minor instrumentation to my pet copy of GHC 6.10.1. In particular, I'd like to add some code to extendInstEnv in compiler/types/InstEnv.lhs. First, I tried importing Debug.Trace into the InstEnv module, and changing the extendInstEnv function to trace "foobar". This built fine, although it was not tremendously useful. Next, I tried the following modification: extendInstEnv :: InstEnv -> Instance -> InstEnv extendInstEnv inst_env ins_item@(Instance { is_cls = cls_nm, is_tcs = mb_tcs }) = trace (showSDocDebug (ppr ins_item)) (addToUFM_C add inst_env cls_nm (ClsIE [ins_item] ins_tyvar)) where add (ClsIE cur_insts cur_tyvar) _ = ClsIE (ins_item : cur_insts) (ins_tyvar || cur_tyvar) ins_tyvar = not (any isJust mb_tcs) This produced the following error message: home/garrett/code/ghc-6.10.1/ghc/stage1-inplace/ghc -package-name base-4.0.0.0 -hide-all-packages -no-user-package-conf -split-objs -i -idist/build -i. -idist/build/autogen -Idist/build/autogen -Idist/build -Iinclude -optP-include -optPdist/build/autogen/cabal_macros.h -#include "HsBase.h" -odir dist/build -hidir dist/build -stubdir dist/build -package ghc-prim-0.1.0.0 -package integer-0.1.0.0 -package rts-1.0 -O -package-name base -XMagicHash -XExistentialQuantification -XRank2Types -XScopedTypeVariables -XUnboxedTuples -XForeignFunctionInterface -XUnliftedFFITypes -XDeriveDataTypeable -XGeneralizedNewtypeDeriving -XFlexibleInstances -XPatternSignatures -XStandaloneDeriving -XPatternGuards -XEmptyDataDecls -XCPP -idist/build -H32m -O -O2 -Rghc-timing -XGenerics -Wall -fno-warn-deprecated-flags -c Data/Maybe.hs -o dist/build/Data/Maybe.o -ohi dist/build/Data/Maybe.hi dist/build/GHC/Base.hi Dict fun GHC.Base.$f9: Interface file inconsistency: home-package module `base:GHC.Base' is needed, but is not listed in the dependencies of the interfaces directly imported by the module being compiled Cannot continue after interface file error <> This is rather 
strange. It looks like an error when compiling Data.Maybe in the base package. I'm not sure why that would need to be recompiled at all, and the error itself is strange. Anyway, when modifying GHC you only need to say 'make' in compiler/, not at the top-level, that is unless you really want to recompile libraries, in which case you should probably 'make clean' in libraries/, then 'make boot' and 'make'. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: libedit.so.0
James Swaine wrote: Aha! That *seems* to have fixed the problem with libedit.so.0. Now ghc is complaining about something else: grep: packages: No such file or directory make -C libraries boot make[1]: Entering directory `/home/jswaine/ghc/ghc-6.10.1/libraries' mkdir bootstrapping mkdir: cannot create directory `bootstrapping': File exists make[1]: [cabal-bin] Error 1 (ignored) /home/jswaine/ghc/ghc-6.10.1/ghc/ghc -Wall -DCABAL_VERSION=1,6,0,1 -odir /home/jswaine/ghc/ghc-6.10.1/libraries/bootstrapping -hidir /home/jswaine/ghc/ghc-6.10.1/libraries/bootstrapping -i/home/jswaine/ghc/ghc-6.10.1/libraries/Cabal -i/home/jswaine/ghc/ghc-6.10.1/libraries/filepath -i/home/jswaine/ghc/ghc-6.10.1/libraries/hpc --make cabal-bin -o cabal-bin ghc: missing -B option make[1]: *** [cabal-bin] Error 1 make[1]: Leaving directory `/home/jswaine/ghc/ghc-6.10.1/libraries' make: *** [stage1] Error 2 This is a highly mysterious error. Did you clean the tree before rebuilding? Always a good idea to do a 'make distclean' if you're not sure what state your build tree is in. If you get this from a clean build, then please put up a complete log of the build somewhere and we'll try to diagnose. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Trouble with ghc on linux x86_64
Bit Connor wrote: The system is a xen virtual machine: Linux 2.6.24-19 SMP x86_64 Dual-Core AMD Opteron(tm) Processor 2212 AuthenticAMD GNU/Linux With GHC version 6.8.2, ghci gives the error: $ ghci GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help ghc-6.8.2: internal error: R_X86_64_32S relocation out of range: (noname) = 0x7f10a29e56d0 (GHC version 6.8.2 for x86_64_unknown_linux) Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug Aborted Yes, this is a known bug. Xen doesn't respect the MAP_32BIT flag to mmap(), which GHC on x86_64 relies on. http://hackage.haskell.org/trac/ghc/ticket/2512 GHC version 6.10.1 gives a similar error: $ ghci GHCi, version 6.10.1: http://www.haskell.org/ghc/ :? for help ghc: internal error: mmap() returned memory outside 2Gb (GHC version 6.10.1 for x86_64_unknown_linux) Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug Aborted > These same 2 results happen on 2 different distros that I've tried. With both GHC versions, compiling haskell programs sometimes works, but sometimes hangs during the linking stage. Compiling and running executables should work fine, only GHCi is affected by the above bug. Could you try -v when linking and see what stage is hanging? Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: mmap() returned memory outside 2Gb - GHC on ubuntu hardy amd64
Rahul Kapoor wrote: I cannot get GHC to work on a fresh ubuntu hardy machine: On installing using the package manager I get an error when running ghci: "R_X86_64_32S relocation out of range:" - similar to http://hackage.haskell.org/trac/ghc/ticket/2013, but that bug is FreeBSD specific. I decided to install the binary distribution: after which running ghci fails with error: GHCi, version 6.10.1: http://www.haskell.org/ghc/ :? for help ghc: internal error: mmap() returned memory outside 2Gb (GHC version 6.10.1 for x86_64_unknown_linux) Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug Aborted I added issue #2779 to Trac for the second case. Are there any possible workarounds for these problems? Are you running under Xen by any chance? http://hackage.haskell.org/trac/ghc/ticket/2512 Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: mmap() returned memory outside 2Gb - GHC on ubuntu hardy amd64
Rahul Kapoor wrote: Are you running under Xen by any chance? Yes I am. This is on a SliceHost instance. Is there a tentative release date for 6.10.2? About 3 months after the 6.10.1 release was the tentative plan. Is anyone with access to a Xen instance able to work on this bug? It would need a hack similar to the one required for http://hackage.haskell.org/trac/ghc/ticket/2063 namely picking an area of the address space to try to mmap() from. (of course, ideally the Xen folks would fix their kernel to respect MAP_32BIT...) Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Linking to Haskell code from an external program
Colin Paul Adams wrote: Embarrassing - I simply forgot to include Fib.o in the link. So it links now (program crashes, but I can try to sort that out). I'm still interested in knowing how to automatically get the list of required libraries. "Colin" == Colin Paul Adams <[EMAIL PROTECTED]> writes: Colin> I am trying to call a Haskell function from an Eiffel Colin> program, using C as an intermediary. Colin> For starters, I compiled and ran a variation of the program Colin> shown in Colin> http://haskell.org/haskellwiki/Calling_Haskell_from_C, to Colin> make sure I had the C-code right. Colin> I then attempted to move it into Eiffel. I can compile the Colin> C code OK, but I'm running into problems with linking. Colin> I solved most of the problems by adding the -v flag to the Colin> call to ghc which I used to link the original (haskell + c Colin> only) program, and cut-and-paste the linker options from Colin> there into the Eiffel configuration file. This isn't really Colin> satisfactory - I would like some automatic way to determine Colin> what the flags should be. The only other way I can think of is to construct the arguments yourself by querying the package database, e.g. "ghc-pkg field rts ld-options", but you'll have to combine the information from several different fields of the packages you use, and basically reproduce what GHC does to construct the ld command line. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
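For reference, the Haskell side of the wiki example discussed in this thread has roughly the following shape. This is only a sketch — the module name Fib and the function names are illustrative, not taken from Colin's actual code — but it shows the foreign export that produces the Fib.o and Fib_stub.h the C (or Eiffel-generated C) side links against.

```haskell
-- Fib.hs: sketch of the Haskell side of a Calling-Haskell-from-C setup.
-- The C side must call hs_init() before, and hs_exit() after, using fib_hs;
-- with GHC of this era it also needs hs_add_root(__stginit_Fib).
{-# LANGUAGE ForeignFunctionInterface #-}
module Fib where

import Foreign.C.Types (CInt)

-- Plain Haskell function we want to expose.
fib :: Int -> Int
fib n = fibs !! n
  where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Marshalled wrapper with a C-friendly type.
fib_hs :: CInt -> CInt
fib_hs = fromIntegral . fib . fromIntegral

-- Generates Fib_stub.h / Fib_stub.c when compiled with ghc -c Fib.hs.
foreign export ccall fib_hs :: CInt -> CInt
```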
Re: Crash in garbage collector
Colin Paul Adams wrote: "Ian" == Ian Lynagh <[EMAIL PROTECTED]> writes: Ian> If you were lucky it would abort with an assertion Ian> failure. Anyway, gdb should now have debugging symbols to Ian> work with. It already did (I passed the -g option to gcc). I guess I will need to install the GHC source to get file listings. That sounds like hard work. I think I'll wait for 6.10.1 . If you're calling a Haskell library from C, the problem might be that you forgot hs_add_root(), or that your hs_add_root() isn't pointing to the right module (it should point to the module that transitively imports all the other modules in your library). You can use multiple hs_add_root() calls if necessary. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: mmap() returned memory outside 2Gb - GHC on ubuntu hardy amd64
Rahul Kapoor wrote: I cannot get GHC to work on a fresh ubuntu hardy machine: On installing using the package manager I get an error when running ghci: "R_X86_64_32S relocation out of range:" - similar to http://hackage.haskell.org/trac/ghc/ticket/2013, but that bug is FreeBSD specific. I decided to install the binary distribution: after which running ghci fails with error: GHCi, version 6.10.1: http://www.haskell.org/ghc/ :? for help ghc: internal error: mmap() returned memory outside 2Gb (GHC version 6.10.1 for x86_64_unknown_linux) Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug Aborted I added issue #2779 to Trac for the second case. Are there any possible workarounds for these problems? I've made a speculative fix. If you're able to test it, that would be very helpful: http://hackage.haskell.org/trac/ghc/ticket/2063 Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: more libedit.so.0 issues
James Swaine wrote: we tried it exactly as you describe below (twice). after it failed the first time, we deleted everything, redownloaded, and tried again. but i know the process works - i've done it successfully on two other machines (though this is the only red hat machine i've ever attempted this on). are there any flags i need to pass to enable verbose logging, or does the build process always log everything? also - where do these log files go, and where should i post them? Do something like this: (./configure && make) 2>&1 | tee build-log Then post your build-log wherever you like, and send us a pointer to it. Or just gzip it and send it to me or Igloo by email. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Trouble with ghc on linux x86_64
Bit Connor wrote: On Fri, Nov 14, 2008 at 5:36 AM, Simon Marlow <[EMAIL PROTECTED]> wrote: Compiling and running executables should work fine, only GHCi is affected by the above bug. Could you try -v when linking and see what stage is hanging? Here is an example with a standard cabal Setup.lhs file. It outputs the following and then hangs. When I look at "top" then there is an "ld" process taking up almost zero cpu, but hundreds of MB of memory. Since it seems to be hanging in the linker, and this is a Xen instance, perhaps you're running out of memory and swapping? BTW, I've checked in a potential fix for the mmap() problem you reported earlier, see http://hackage.haskell.org/trac/ghc/ticket/2063 Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Cannot escape from a black hole in ghci 6.10.1
Ahn, Ki Yung wrote: > Dear GHC users > > I have both GHC 6.8.2 (debian unstable package) and > GHC 6.10.1 (binary distribution from ghc homepage) installed. > > I was able to escape from a black hole in ghci 6.8.2 with Ctrl-C. > For example, > > GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help > Loading package base ... linking ... done. > Prelude> let x = x + x > Prelude> x > ^CInterrupted. > Prelude> x > *** Exception: stack overflow > Prelude> > > Interestingly, you will get a stack overflow when you try to escape from > the black hole for the second time. > > In ghci 6.10.1, I am not able to escape from a black hole with Ctrl-C. > > GHCi, version 6.10.1: http://www.haskell.org/ghc/ :? for help > Loading package ghc-prim ... linking ... done. > Loading package integer ... linking ... done. > Loading package base ... linking ... done. > Prelude> let x = x + x > Prelude> x > ^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C ... (doesn't work) > > You can't escape from the black hole and ghci will start > eating up your memory unless you kill the ghci process. > Now the black hole behaves like a real black hole :-) > > Is this a ticketed bug, or should we make a new ticket? Yes, this is the same bug as #2783 (fixed yesterday) and probably also #2780 (I still have to test the fix against that program). It'll be fixed in 6.10.2. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: 6.10.1 Bug?
Dominic Steinitz wrote: Dominic Steinitz wrote: According to the hackage page, the Haskell Cryptography Library has a build failure. I couldn't find a bug reference when I searched for milestone 6.10.1 on trac. Should I report it? Has it been fixed? Thanks, Dominic. http://hackage.haskell.org/packages/archive/Crypto/4.1.0/logs/failure/ghc-6.10 [4 of 4] Compiling Main ( SHA1Test.hs, dist/build/SHA1Test/SHA1Test-tmp/Main.o ) ghc: panic! (the 'impossible' happened) (GHC version 6.10.1 for i386-unknown-linux): RegAllocLinear.getStackSlotFor: out of stack slots, try -fregs-graph Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug I found it. It was filed under milestone 6.10.2 (clearly I don't understand milestones). http://hackage.haskell.org/trac/ghc/ticket/2753 It seems a bit vague on what's happening. Is it going to be fixed? Has it been fixed? We'll do something for 6.10.2, yes. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: language features in 6.10.1
Serge D. Mechveliani wrote: The GHC team writes The (Interactive) Glasgow Haskell Compiler -- version 6.10.1 [..] There have been a number of significant changes since the last major release, including: * Some new language features have been implemented: * Record syntax: wild-card patterns, punning, and field disambiguation * Generalised quasi-quotes * Generalised list comprehensions * View patterns [..] Please, where and in what chapters these 4 language features are documented? Thank you in advance for help, Direct links to the docs are in the release notes: http://www.haskell.org/ghc/docs/latest/html/users_guide/release-6-10-1.html Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: Unicode's greek lambda
Duncan Coutts wrote: On Tue, 2008-11-18 at 11:51 +, Ross Paterson wrote: On Tue, Nov 18, 2008 at 10:30:01AM +, Malcolm Wallace wrote: When the -XUnicodeSyntax option is specified, GHC accepts some Unicode characters including left/right arrows. Unfortunately, the letter "greek lambda" cannot be used. Are there any technical reasons to not accept it? The "greek lambda" is a normal lower-case alphabetic character - it can be used in identifier names. But it could be a reserved word synonymous with \. After all, \ can occur in operator symbols, but the operator \ is reserved. Presumably that would let you do (\ x -> ...) but not (\x -> ) since the "\x" would run together and lexically it would be one identifier. Exactly. Here's the relevant patch: Tue Jan 16 16:11:00 GMT 2007 Simon Marlow <[EMAIL PROTECTED]> * Remove special lambda unicode character, it didn't work anyway Since lambda is a lower-case letter, it's debatable whether we want to steal it to mean lambda in Haskell source. However if we did, then we would probably want to make it a "special" symbol, not just a reserved symbol, otherwise writing \x->... (using unicode characters of course) wouldn't work, because \x would be treated as a single identifier, you'd need a space. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
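The lexical point being made above is easy to demonstrate: because λ (U+03BB) is classified as a lower-case letter, GHC already accepts it as an ordinary identifier, which is exactly why `λx` would lex as a single varid rather than a lambda applied to `x`. A small sketch (assumes a GHC that reads UTF-8 source; the names are illustrative):

```haskell
-- λ is a lower-case letter in Haskell's lexical syntax, so it can be
-- (and here is) a perfectly ordinary identifier, not a binder.
module LambdaDemo where

λ :: Int -> Int
λ x = x + 1

-- An ordinary function application; nothing lambda-like about it.
demo :: Int
demo = λ 41
```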
Re: ANNOUNCE: GHC version 6.10.1 - EditLine / terminal incompatibility?
Philip Hölzenspies wrote: Something that I would personally very much enjoy is a clear and unmistakable warning from configure when editline is not found. After installing libedit-devel (my ghci-6.10.1 finally has comfortable editing now), I checked config.log; there's not even a mention of libedit capabilities. Maybe a general summary at the end of configure of all the optional capabilities that were or were not configured. As an inspiration, look at gtk2hs' configure output. Good idea - I've added it to the list of things to do in the new build system. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: mmap() returned memory outside 2Gb - GHC on ubuntu hardy amd64
Rahul Kapoor wrote: I've made a speculative fix. If you're able to test it, that would be very helpful: http://hackage.haskell.org/trac/ghc/ticket/2063 Cheers, Simon I had no luck installing the binary snapshot (libedit problems, which I worked around by creating a link to the version I had), after which I started getting libc version problems, so I gave up on that and built the HEAD snapshot from source. running ghci in-place now gives: GHCi, version 6.11.20081117: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. ghc: internal error: loadObj: failed to mmap() memory below 2Gb; asked for 57344 bytes at 0x0x4009fa00, got 0x0x7f3a9b2b8000. Try specifying an address with +RTS -xm<address> -RTS (GHC version 6.11.20081117 for x86_64_unknown_linux) I am not really sure what the <address> for -xm should be. Ok, so we got a bit further, but it looks like at some point mmap() stopped giving us memory where we wanted it, perhaps because something else was already mapped at that address. It would be useful to get a look at the memory map. Can you load up GHCi in gdb, as follows. First, find out the path to your GHC binary and the -B option it needs: $ cat `which ghc` #!/bin/sh exec /home/simonmar/fp/lib/x86_64-unknown-linux/ghc-6.10.1/ghc -B/home/simonmar/fp/lib/x86_64-unknown-linux/ghc-6.10.1/. -dynload wrapped ${1+"$@"} Here, /home/simonmar/fp/lib/x86_64-unknown-linux/ghc-6.10.1/ghc is my GHC binary, and -B/home/simonmar/fp/lib/x86_64-unknown-linux/ghc-6.10.1/. is the option I need to pass. Now start gdb: $ gdb /home/simonmar/fp/lib/x86_64-unknown-linux/ghc-6.10.1/ghc and run GHC: (gdb) run --interactive -B/home/simonmar/fp/lib/x86_64-unknown-linux/ghc-6.10.1/. It will now crash with the error you saw above. At this point we need to look at the memory map. Find the PID of the GHC process, by running 'ps uxw' in a terminal (don't quit gdb). Then $ cat /proc/<pid>/maps where <pid> is the PID of the GHC process running in gdb, and send me the output. 
Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: ANNOUNCE: GHC version 6.10.1 - EditLine / terminal incompatibility?
Duncan Coutts wrote: On Wed, 2008-11-19 at 20:45 +0100, Philip Hölzenspies wrote: On Nov 19, 2008, at 6:25 PM, Simon Peyton-Jones wrote: Would it be worth adding this hard-won lore somewhere on a Wiki where it can be found later? Dear Simon, all, I don't think logging a specific option on the Wiki is particularly useful (maybe you would have a default editrc file), because, of course, everybody has their own particular wishes. I don't have any particular wishes. I just want it to "work" where "work" means does what the rest of my system does. I never configured readline, it came with sensible defaults (possibly set by my linux system distributor). What I really think is that we should add back optional readline support. People building closed source ghc binaries would not use it but linux distros could enable it and provide a better "out of the box" experience. As I understand it there would be no licensing problems with that approach. One downside I can see is that it gives us an extra configuration to test and maintain. It's hard enough keeping one line-editing binding working, let alone two! It's true that editline seems to have brought a bunch of headaches with it, though. Perhaps Haskeline is the way to go in the future. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: mmap() returned memory outside 2Gb - GHC on ubuntu hardy amd64
Rahul Kapoor wrote: Attached is the memory map of ghc process running under gdb on a xen instance. Thanks! I think I see the bug. I've just pushed another patch ("round the size up to a page in mmapForLinker() instead of in the caller"), if you could try it out that would be great. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: mmap() returned memory outside 2Gb - GHC on ubuntu hardy amd64
Rahul Kapoor wrote: Thanks! I think I see the bug. I've just pushed another patch ("round the size up to a page in mmapForLinker() instead of in the caller"), if you could try it out that would be great. ghci from ghc-6.11.20081120 works without problems on my xen instance! Nice! Thanks for helping out. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: pseq strictness properties
Duncan Coutts wrote: On Fri, 2008-11-21 at 10:53 +, Simon Marlow wrote: The docs probably shouldn't say anything about what the compiler sees, it should stick to what the programmer sees. Duncan - do you want to try rewording it? Hmm, though the difference is really in what transformations you want the compiler not to do. I would not say the operational behaviour is actually different. Used in isolation there's really no way to distinguish them, even using trace and other tricks to observe the evaluation order. Yes, to be more precise, the difference is in what *guarantees* you get about operational behaviour. You might be able to observe a difference with trace, or you might not, depending on what the compiler did. Using trace you will always observe pseq's first argument evaluated before its second, that's not true of seq. I guess we can try to simplify it to something like "evaluation happens here" (pseq) vs "evaluation happens here or before" (seq). Ok, but we need to be careful: it would be wrong to talk about ordering at all with respect to seq, since it tells you nothing about ordering. The implementation might be using a non-lazy evaluation strategy, for example. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: pseq strictness properties
Duncan Coutts wrote: I don't think I'm just speaking for myself when I say that pseq is confusing and the docs similarly. Given the type a -> b -> b we would assume that it is lazy in its first arg and strict in the second. (Even in the presence of seq we know that it really really must be strict in its second arg since it returns it or _|_ in which case it's still strict). Of course we know of the seq primitive with this type that is strict in both. However we also now have pseq that has the _opposite_ "static" strictness to the original expected strictness. Statically, pseq claims that it's strict in the first but lazy in the parameter that it _returns_. At runtime of course it is strict in both parameters. Given the need for pseq I would have expected pseq to statically be lazy in its first argument (and actually be strict at runtime). I expected it'd statically be strict in the second arg. So I'm wondering if there is a good explanation for pseq having the opposite strictness properties to that which I expected. At first, even after reading the docs I assumed that it was just a typo and that it really meant it was lazy in the first arg (statically). I was just recording a patch to "fix" the documentation when I checked the underlying code and found to my surprise that the original docs are correct. I also think the docs need to be clarified to make this distinction between the actual strictness behaviour and what the compiler thinks it is during strictness analysis. Since they are different I think it's an important distinction to make. Phil Trinder and others have been working on formalising the difference between pseq and seq; they might be able to tell you more. But here's how I think of it: Denotationally, pseq and seq are interchangeable. That's important, it tells you what the strictness of pseq is: it's the same as seq. The difference between the two is entirely at the operational level, where we can talk about the order of evaluation. 
'pseq x y' causes both x and y to be evaluated, and in the absence of any other demands on x and y, x will be evaluated earlier than y. This isn't quite the same as saying "pseq x y" evaluates x and then y, although in many cases that is what happens. The compiler is still free to move the evaluation of x earlier. It might also be the case that y is already evaluated, so it's not true to say that x is always evaluated before y. In order to make pseq work like this, we have to prevent GHC from performing certain transformations at compile-time. That is because otherwise, GHC is allowed to make any transformations that respect the denotation of the program, but with pseq we want a guarantee at the operational level. We want to prevent GHC from re-ordering evaluation such that y (or any "part of y" if y is a larger expression) is evaluated before x, but GHC can only do that if it knows that y is strictly demanded by pseq. So by removing this information, using the lazy annotation, we prevent GHC from performing the offending transformations. The docs probably shouldn't say anything about what the compiler sees, it should stick to what the programmer sees. Duncan - do you want to try rewording it? Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
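The ordering guarantee described above can be made visible with trace. A minimal sketch (assumes Control.Parallel from the parallel package, where pseq lives in GHC of this era): with pseq, the trace on x is observed before the trace on y, whereas with plain seq the compiler would be free to evaluate y first.

```haskell
-- Sketch: observing pseq's evaluation-order guarantee with Debug.Trace.
import Control.Parallel (pseq)
import Debug.Trace (trace)

main :: IO ()
main = do
  let x = trace "evaluating x" (1 :: Int)
      y = trace "evaluating y" (2 :: Int)
  -- pseq guarantees x is evaluated before y; with seq, either
  -- order would be a legal compilation of this program.
  print (x `pseq` y)
```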
Re: GHCi debugger status
Daniil Elovkov wrote: I'd like to know, how do ghc developers and users feel about the debugger? I'm using it to some extent on ghc 6.8.2 and find it useful. But I'm getting an impression that it's not too stable. I'm not filing any bug report yet, just want to know how it feels for others. I used to make it panic. I think, it was due to existential types. Now I see it mess up the list of bindings in a funny way. For example, in a previous trace session I had a variable, say, prev. It was bound during pattern matching in a function, say, prevFunc. Now I'm having another trace session, actually stepping from the very beginning. A couple of steps after the beginning, prev suddenly appears in the bindings where prevFunc absolutely has not yet been invoked. It's completely unrelated. In 'show bindings' prev has a wrong type - SomeType (it's ADT). Its real type (when in scope) is [t]. I ask 'length prev' and get 0 :) So, what is your impression of the debugger? Please file bug reports! Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: ghci-haskeline (was: Re: ANNOUNCE: GHC version 6.10.1 - EditLine / terminal incompatibility?)
Judah Jacobson wrote: On Thu, Nov 20, 2008 at 7:16 AM, Ian Lynagh <[EMAIL PROTECTED]> wrote: Although another option would be to make GHCi a separate (GPL) frontend to the (BSD) GHC API. The only downside is that (with static linking) we have another large binary. Another upside is that other GHC API users don't get an editline dependency. I've actually been experimenting with something similar: darcs get http://code.haskell.org/~judah/ghci-haskeline/ If you have ghc-6.10.1, 'cabal install'ing inside that repo will give you a version of ghci which uses Haskeline as its backend. Basically, I copied 4 modules from the GHC source tree (GhciMonad, InteractiveUI, GhciTags and Main) and modifed them to use Haskeline; the rest of GHC is obtained through the API. Current benefits over the readline/editline versions: - Works on Windows I can attest to that. Nice going Judah! $ cabal update $ darcs get http://code.haskell.org/~judah/ghci-haskeline/ $ cd ghci-haskeline $ cabal install and I have a GHCi on Windows that can do completion, history search, and exits when I hit ^D. That's made my day. BTW, your LICENSE file looks like it was copied from the GHC source tree, it still has various references to GHC and the University of Glasgow. Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: ANNOUNCE: GHC version 6.10.1 - EditLine / terminal incompatibility?
Duncan Coutts wrote: On Thu, 2008-11-20 at 12:51 +, Simon Marlow wrote: Duncan Coutts wrote: What I really think is that we should add back optional readline support. People building closed source ghc binaries would not use it but linux distros could enable it and provide a better "out of the box" experience. As I understand it there would be no licencing problems with that approach. One downside I can see is that it gives us an extra configuration to test and maintain. It's hard enough keeping one line-editing binding working, let alone two! It's true that editline seems to have brought a bunch of headaches with it, though. Perhaps Haskelline is the way to go in the future. My selfish suggestion is that we maintain the readline configuration and let the people who originally wanted editline support do the work to maintain that configuration. I get the feeling they don't care about if it works well, just that it's got the license they want (which is a perfectly reasonable position). I propose we do this: * extract the GHCi UI from the GHC package, put it in the ghc-bin package (maybe we should rename this package to ghc-main or something). This removes the editline and bytestring (for now) dependencies from the GHC package. * Move to Haskeline for the default build. We have to bring in terminfo and utf8-string as bootlibs. This gives us line-editing on Windows, and removes problematic external C library dependencies. * Make it possible to compile the ghc-bin package against readline. Upload ghc-bin to Hackage, so people can say cabal install ghc-bin -f readline and get a GHCi built against readline if they want. Oops - except that this would mean that the ghc-main package has a variant license. So maybe we have to have a separate ghc-readline package? Ok? Cheers, Simon ___ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
Re: GHCi debugger status
Peter Hercek wrote: Daniil Elovkov wrote: I'd like to know, how do ghc developers and users feel about the debugger? Sometimes it is better/quicker than "printf debugging" :) Now I see it mess up the list of bindings in a funny way. For example, in a previous trace session I had a variable, say, prev. It was bound during pattern matching in a function, say, prevFunc. Now I'm having another trace session, actually stepping from the very beginning. A couple of steps after the beginning, prev suddenly appears in the bindings where prevFunc absolutely has not yet been invoked. It's completely unrelated. In 'show bindings' prev has a wrong type - SomeType (it's an ADT). Its real type (when in scope) is [t]. I ask 'length prev' and get 0 :) It is supposed to show only free variables in the selected expression. I'm sure I had cases when I was able to access variables which were not free in the selected expression but which would have been in scope if used in the selected expression. The values available seemed correct (contrary to your case). I thought it was a step to get all the variables in scope to be visible but later I learned it is not feasible and my lucky experience was probably a bug. If I encounter it again should I file a bug report? I mean: is it really a bug? At the least it's suspicious, and could be a bug. Please do report it, as long as you can describe exactly how to reproduce the problem. Cheers, Simon
Re: GHCi debugger status
Claus Reinke wrote: >> It is supposed to show only free variables in the selected expression. I'm sure I had cases when I was able to access variables which were not free in the selected expression but which would have been in scope if used in the selected expression. The values available seemed correct (contrary to your case). I thought it was a step to get all the variables in scope to be visible but later I learned it is not feasible and my lucky experience was probably a bug. If I encounter it again should I file a bug report? I mean: is it really a bug? Perhaps someone could help me to understand how the debugger is supposed to be used, as I tend to have this problem, too: - when I'm at a break point, I'd really like to see the current scope or, if that is too expensive, the next enclosing scope, in full (not only would that tell me what instantiation of my code I'm in, it would also seem necessary if I want to reconstruct what the current expression is) I don't understand what you mean here - surely in order to "reconstruct what the current expression is" you only need to know the values of the free variables of that expression? Also I don't understand what you mean by the "next enclosing scope". Could you give an example? - with only the current expression and some of its free variables accessible, I seem to be unable to use the debugger effectively (it would be less of a problem if the current expression could be displayed in partially reduced source form, instead of partially defined value form) I don't know what "partially defined value form" is. The debugger shows the source code as... source code! I think I know what you mean though. You want the debugger to substitute the values of free variables in the source code, do some reductions, and show you the result. Unfortunately this is quite hard to implement in GHCi (but I agree it might be useful), because GHCi is not a source-level interpreter.
Currently, I only use the debugger rarely, and almost always have to switch to trace/etc to pin down what it is hinting at. What is the intended usage pattern that makes the debugger more effective/convenient than trace/etc, and usable without resorting to the latter? Set a breakpoint on an expression that has the variable you're interested in free, and display its value. If your variable isn't free in the expression you want to set a breakpoint on, then you can add a dummy reference to it. Cheers, Simon
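Simon's recipe (break where the variable is free; otherwise add a dummy reference) can be sketched as follows. The module, function names, and GHCi line numbers here are illustrative, not taken from the thread:

```haskell
-- Illustrative only: a function we might want to inspect in GHCi.
-- In a session you would say something like:
--   ghci> :break Mean 6     -- the line of 's = sum xs'
--   ghci> mean [1,2,3]      -- stops with 'xs' among the bindings
mean :: [Double] -> Double
mean xs = s / fromIntegral n
  where
    s = sum xs      -- 'xs' is free here, so a breakpoint on this
    n = length xs   -- line offers 'xs' for inspection

-- If the variable you care about is NOT free at the breakpoint
-- site, force a dummy reference so the debugger can show it:
meanDebug :: [Double] -> Double
meanDebug xs = xs `seq` mean xs  -- 'xs' is free in this expression
```

At the stop you can then evaluate things like `length xs` or `:print xs` before `:continue`.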
Re: ANNOUNCE: GHC version 6.10.1 - EditLine / terminal incompatibility?
Duncan Coutts wrote: On Fri, 2008-11-21 at 14:01 +, Simon Marlow wrote: I propose we do this: * extract the GHCi UI from the GHC package, put it in the ghc-bin package (maybe we should rename this package to ghc-main or something). This removes the editline and bytestring (for now) dependencies from the GHC package. This is good independently of the other suggestions. * Move to Haskeline for the default build. We have to bring in terminfo and utf8-string as bootlibs. This gives us line-editing on Windows, and removes problematic external C library dependencies. I think this is worth trying. It seems like Judah is prepared to put the work in to make haskeline work on various platforms and to trim the dependencies. E.g. I'd rather we decide what to do about Unicode input rather than just end up de-facto standardising the utf8-string package. It seemed like we had a consensus on what to do for the H98 IO with Unicode. We just need to get on with it. Yes - I did some work on this, but didn't finish it. Unfortunately trying to shoehorn encodings into the IO library isn't pretty, and I became a bit disillusioned. I could probably still get it working, but I think I was really hoping that a better architecture for the IO library might emerge, and so far it didn't :-) > In addition we need to agree some control over encoding when using a specific encoding is called for (eg reading a file that is known to be UTF-16 independent of the locale). Right, this will certainly be possible when we have System.IO doing encoding/decoding, we just need to agree on an API. Oops - except that this would mean that the ghc-main package has a variant license. So maybe we have to have a separate ghc-readline package? A variant license isn't a fundamental technical problem though perhaps the consensus is that variant licenses are a "bad thing". I'm not sure. One would have to use OtherLicense and specify what the conditions are in the license file.
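The API that eventually landed in System.IO (with GHC 6.12) provides exactly the control Duncan asks for here: per-handle encodings that override the locale default. A sketch of Duncan's UTF-16 example using that later API (the helper names are ours):

```haskell
import System.IO

-- Read a file known to be UTF-16, independent of the current
-- locale, by overriding the handle's locale-derived encoding.
readUtf16File :: FilePath -> IO String
readUtf16File path = do
  h <- openFile path ReadMode
  hSetEncoding h utf16     -- explicit encoding, not the locale's
  s <- hGetContents h
  length s `seq` hClose h  -- force the lazy contents before closing
  return s

-- The same handle-level control works for writing:
writeUtf16File :: FilePath -> String -> IO ()
writeUtf16File path s = withFile path WriteMode $ \h -> do
  hSetEncoding h utf16
  hPutStr h s
```

`System.IO` also exports `utf8`, `utf16le`, `utf16be`, `utf32`, `latin1`, and `mkTextEncoding` for named iconv encodings.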
I think it's a good idea to avoid variant licenses, especially in libraries. We want it to be easy for someone to know whether they're complying with the licenses for the libraries they depend on, and if those licenses depend on choices made at the time the library was built, then it's much harder. Cheers, Simon
Re: ANNOUNCE: GHC version 6.10.1 - EditLine / terminal incompatibility?
Simon Marlow wrote: I think it's a good idea to avoid variant licenses, especially in libraries. We want it to be easy for someone to know whether they're complying with the licenses for the libraries they depend on, and if those licenses depend on choices made at the time the library was built, then it's much harder. Oh, I should add that ghci-bin (or whatever) is an application, not a library, so it's less of an issue for it to have a variant license. Cheers, Simon
Re: Linking to Haskell code from an external program
Colin Paul Adams wrote: "Simon" == Simon Peyton-Jones <[EMAIL PROTECTED]> writes: Simon> It looks as if you are somehow failing to link your binary Simon> with package 'base'. (Are you using 'ghc' as your linker; Simon> you should be.) But others are better than I at this kind Simon> of stuff. I have base-4.0.0.0 specified. No, I am not using ghc as the linker. Since I am calling the Haskell routine from an Eiffel program, I don't see how I could do that with any sort of ease. I think you'll have to post complete instructions to reproduce the problem you're having; it's hard to piece it together from the information you've given. The problem will be in the details somewhere. Cheers, Simon
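Simon PJ's advice (let ghc do the link, so package 'base' and the RTS are pulled in) usually goes together with a foreign-exported entry point. A minimal, hypothetical sketch of the recipe; the function, file names, and link line are illustrative, not Colin's actual code:

```haskell
-- Hypothetical Haskell side of a foreign-language embedding.
-- To export 'fib' to C (and hence to Eiffel via C glue), you
-- would add at the top of the module:
--
--   {-# LANGUAGE ForeignFunctionInterface #-}
--   foreign export ccall fib :: Int -> Int
--
-- and then link the whole program with ghc itself, which pulls in
-- package 'base' and the RTS automatically:
--
--   ghc -no-hs-main Fib.hs main.c -o main
--
-- The foreign side must call hs_init() before the first call into
-- Haskell and hs_exit() after the last one.
fib :: Int -> Int
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)
```

Linking with a foreign linker instead of ghc is possible but requires passing all the package libraries and RTS flags by hand, which is exactly the failure mode described in this thread.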
Re: Backspace/delete/history in ghci
Colin Paul Adams wrote: I can't use the backspace key, or arrow keys in ghci (6.10.1) as I would normally expect to on any program on Linux. Is this connected with the readline/editline/haskeline debate I have observed recently? You seem to be encountering an unusually severe reaction to the editline switchover. What terminal type is this? Cheers, Simon
Re: Can't compile GHC 6.8.2
Barney Stratford wrote: I'm trying to compile GHC 6.8.2 using my existing GHC 6.2, but the typechecker refuses to compile. The problem seems to be that the hi-boot files in compiler/typecheck contain some incorrect type signatures. I've fixed most of them, but TcMatches.hi-boot-6 has slightly stumped me. As it stands, it says tcMatchesFun :: Name.Name -> HsExpr.MatchGroup Name.Name -> TcType.BoxyRhoType -> TcRnTypes.TcM (HsBinds.HsWrapper, HsExpr.MatchGroup TcRnTypes.TcId) but it should say something like tcMatchesFun :: Name.Name -> Bool -> HsExpr.MatchGroup Name.Name -> TcType.BoxyRhoType -> TcRnTypes.TcM (HsBinds.HsWrapper, HsExpr.MatchGroup TcRnTypes.TcId) Unfortunately, that doesn't work, as it assumes I meant TcMatches.Bool, so I tried saying Prelude.Bool instead. Now I get the complaint that Prelude.Bool isn't in scope. Has anyone else seen this issue? I've looked for answers in the docs and with Google, but no luck. I think we only supported using GHC 6.4 for building 6.8. Using 6.2 might be possible, but no guarantees. To answer your question above, you probably want GHC.Base.Bool (hi-boot files used to need "original names", that is, the module that originally defined a thing, which might be different from the module you normally get it from). Cheers, Simon
Re: Linking to Haskell code from an external program
Colin Paul Adams wrote: "Simon" == Simon Marlow <[EMAIL PROTECTED]> writes: Simon> Colin Paul Adams wrote: >>>>>>> "Simon" == Simon Peyton-Jones <[EMAIL PROTECTED]> >>>>>>> writes: >> Simon> It looks as if you are somehow failing to link your binary Simon> with package 'base'. (Are you using 'ghc' as your linker; Simon> you should be.) But others are better than I at this kind Simon> of stuff. >> >> I have base-4.0.0.0 specified. >> >> No, I am not using ghc as the linker. Since I am calling the >> Haskell routine from an Eiffel program, I don't see how I could >> do that with any sort of ease. Simon> I think you'll have to post complete instructions to Simon> reproduce the problem you're having; it's hard to piece it Simon> together from the information you've given. The problem Simon> will be in the details somewhere. It seems it was some complication when switching from 6.8 to 6.10. I didn't clean up properly. Links now, but I'm still getting the crash in the garbage collector. :-( Perhaps try reducing the example until the problem goes away, so we can see at which stage it gets introduced? Or can you boil down your example to something we can reproduce? Cheers, Simon
Re: Can't compile GHC 6.8.2
Barney Stratford wrote: There's good news and bad news. The good news is that the compilation of my shiny almost-new GHC is complete. The bad news is, it won't link. It's grumbling about ld: /System/Fink/src/fink.build/ghc-6.8.2-1/ghc-6.8.2/rts/libHSrts.a(PrimOps.o) has external relocation entries in non-writable section (__TEXT,__text) for symbols: ___gmpn_cmp ___gmpn_gcd_1 ___gmpz_fdiv_qr ___gmpz_init ___gmpz_tdiv_qr ___gmpz_com ___gmpz_xor ___gmpz_ior ___gmpz_and ___gmpz_divexact ___gmpz_tdiv_r ___gmpz_tdiv_q ___gmpz_gcd ___gmpz_mul ___gmpz_sub ___gmpz_add Looking through the archives, it seems like this is an old favourite bug rearing its ugly head again. The suggested remedy of commenting out the line reading "SRC_HC_OPTS += -fvia-C" in the Makefile won't work, as that line isn't there anyway. The manpage for ld has an option "-read_only_relocs warning" that looks like it might suppress this problem, but I don't fully understand what it's doing. What would you suggest? The workaround is almost certainly to build the RTS with -fasm. However, I'd be really pleased if someone could get to the bottom of this, so that we can fix the root cause. You can get a look at the .s file generated for PrimOps by saying $ cd rts $ make PrimOps.s (make sure you're still compiling via-C). Have a look at the references to those symbols, do they look suspicious at all? Try compiling the same module with -fasm, and compare the .s file with the via-C version. Also you could try poking around in the .o file with objdump, and see what relocation entries are being generated in both cases. Cheers, Simon
Re: GHCi debugger status
Claus Reinke wrote: fun x y = let f1 = ... (f2 x) ... -- f1 calls f2 f2 x = x * 2 in case x of 1 -> f2 0 _ -> f2 (f1 y) g x = let z = (some complex computation) in z `div` x main = print (g (fun 1 2)) This is a classical example of why laziness gets in the way of debugging. Now, when (f2 0) gets finally evaluated and throws a divByZero error, x and y are nowhere to be found. Since we do not have a real dynamic stack, it is difficult to say where their values should be stored. The only place I can think of is at the breakpoint itself, but then as Simon said this poses a serious efficiency problem. Isn't that a case of premature optimization? I will happily complain about performance issues later, after the debugger turns out to be merely too slow!-) No, it's a real problem. If we retained all the variables in scope at every breakpoint, GHCi would grow a whole bunch of space leaks. It's pretty important that adding debugging shouldn't change the space behaviour of the program. Of course, constant factors are fine, but we're talking asymptotic changes here. Now perhaps it would be possible to arrange that the extra variables are only retained if they are needed by a future breakpoint, but that's tricky (conditional stubbing of variables), and :trace effectively enables all breakpoints so you get the space leaks back. A similar argument applies to keeping the dynamic stack. The problem with the dynamic stack is that it doesn't look much like you expect, due to tail-calls. However, I think it would be good to let the user browse the dynamic stack (somehow, I haven't thought through how hard this would be). But what I'd really like is to give the user access to the *lexical* stack, by re-using the functionality that we already have for tracking the lexical stack in the profiler. See http://hackage.haskell.org/trac/ghc/wiki/ExplicitCallStack > (btw, is there a debugger home page on the wiki, where issues/FAQs like "why can't I have scope contexts" are documented?)
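Simon's asymptotic point can be made concrete with a small sketch (illustrative, not from the thread): the loop below runs in constant space because the list spine is collected as it is consumed, and a breakpoint that retained the in-scope list would change that to linear residency.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Counts the elements of a lazily produced list in O(1) space:
-- once 'go' has moved past a cons cell, nothing references it,
-- so the garbage collector can reclaim it immediately.
count :: Int -> Int
count n = go 0 [1 .. n]
  where
    go :: Int -> [Int] -> Int
    go !acc []       = acc
    go !acc (_ : xs) = go (acc + 1) xs

-- If a breakpoint inside 'go' retained every variable in scope,
-- including the list passed in from 'count', the whole spine would
-- stay reachable until the loop finished: O(1) residency becomes
-- O(n). That is the asymptotic change Simon is describing.
```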
No, please by all means start one. Cheers, Simon
Re: GHCi debugger status
Claus Reinke wrote: Consider this code and session: f x y z | x ... Things to note: - when reaching the breakpoint "in" 'f', one isn't actually in 'f' yet - nothing about 'f' can be inspected - at no point in the session was 'x' inspectable, even though it is likely to contain information needed to understand 'f', especially when we are deep in a recursion of a function that can be called from many places; this information doesn't happen to be needed in the current branch, but debugging the current expression always happens in a context, and accessing information about this context is what the GHCi debugger doesn't seem to support well In this particular example, the second item is most likely a bug (the free variables of the guard were never offered for inspection). Indeed it was a bug, the same as #2740, and I've just fixed it. Thanks for boiling it down to a nice small example. Cheers, Simon
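The archive has truncated Claus's example to `f x y z | x ...`, so the exact code is lost; a hypothetical function of the same shape shows the situation being discussed:

```haskell
-- Hypothetical reconstruction of the shape in question (not
-- Claus's actual code): the guard 'x > y' has both x and y free,
-- so stopping at the guard should offer both for inspection.
-- Their absence was the bug fixed alongside ticket #2740.
f :: Int -> Int -> Int -> Int
f x y z
  | x > y     = z
  | otherwise = negate z
```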
Re: Can't compile GHC 6.8.2
Barney Stratford wrote: The workaround is almost certainly to build the RTS with -fasm. According to http://www.haskell.org/ghc/docs/latest/html/users_guide/options-phases.html#options-codegen, -fasm is the default option anyway, or am I missing something here? It looks like this is an issue with the way Apple's version of gcc works. The gmp people are complaining bitterly, saying not to use a Mac at all. Do you have a link to something describing the problem? I didn't turn up anything that looked directly relevant, only this: http://gmplib.org/macos.html which seems to suggest that problems can be worked around with flags to gcc. We don't use -fno-pic. Cheers, Simon
Re: GHCi debugger status
Claus Reinke wrote: No, it's a real problem. If we retained all the variables in scope at every breakpoint, GHCi would grow a whole bunch of space leaks. It's pretty important that adding debugging shouldn't change the space behaviour of the program. Of course, constant factors are fine, but we're talking asymptotic changes here. Now perhaps it would be possible to arrange that the extra variables are only retained if they are needed by a future breakpoint, but that's tricky (conditional stubbing of variables), and :trace effectively enables all breakpoints so you get the space leaks back. Then how about my suggestion for selectively adding lexical scope to breakpoints? I'd like to be able to say :break {names} and have GHCi make the necessary changes to keep {names} available for inspection when it hits that breakpoint. The only easy way to do that is to recompile the module that contains the breakpoint. To do it without recompiling is about as hard as doing what I suggested above, because it involves a similar mechanism (being able to selectively retain the values of free variables). Cheers, Simon