On Fri, 2007-11-09 at 21:56 +0000, Claus Reinke wrote:
> > Nice. So I suppose the --simple-output flag works with it?
>
> yes, it is just like 'list package', only it matches on
> exposedModules rather than package id.
>
> note: if you apply the patch, you will want to add find-module
> to 'usageHeader' as well - seems i missed that.
>
> > The solution we were thinking of for cabal though would be for
> > Cabal to ask ghc-pkg for all of its information and then Cabal
> > can perform whatever queries it needs rather than making lots
> > of calls to ghc-pkg.
> > http://hackage.haskell.org/trac/ghc/ticket/1839
> > Ticket #1839 (need ghc-pkg dump feature)
>
> what you're really saying is that you'd like to relegate ghc-pkg
> to a format conversion and do all the work in cabal. is it then
> still sensible to have two separate tools, or should cabal take
> over completely?
I don't think that's necessary. All we need is a bit of cooperation between the tools. GHC can record whatever info it likes and store it in whatever format it likes, but we do need some info on the installed packages. We can get that now by calling ghc-pkg list and then ghc-pkg describe on each package. ghc-pkg describe returns a standard format defined by Distribution.InstalledPackageInfo in the Cabal library. All we're asking for is a slightly faster way of doing that. When we have 150 installed packages, calling ghc-pkg 150 times takes around three and a half minutes (when I had 150 hackage packages installed I clocked ghc-pkg at 1.5s to describe any package). Even if ghc-pkg were faster, it's still more convenient to do the queries ourselves than to ask ghc-pkg every time.

> > As you suggest it would be great for Cabal to tell you that you're
> > missing package dependencies rather than letting ghc stumble over
> > the problem with a correspondingly more confusing error message.
> > Indeed we could do nice things like building without needing a
> > .cabal file or automatically generating a minimal .cabal file by
> > inspecting local files and the package dbs.
> > http://hackage.haskell.org/trac/hackage/ticket/171
> > Ticket #171 (Add command to automagically create an initial cabal
> > package)
>
> we had a discussion about ghci vs cabal a while back:
>
> http://www.haskell.org/pipermail/glasgow-haskell-users/2006-November/011554.html

Mm, that'd be cool. The advantage of Cabal doing that is that it can handle any configure steps and manage pre-processors. BTW, this does not have to be in the Cabal lib itself; it can be in front-end tools like cabal-install. That way we keep the size and dependencies of the Cabal lib lower.

> and the topic keeps coming up in different contexts:
>
> http://www.haskell.org/pipermail/glasgow-haskell-users/2007-February/012035.html
>
> it took me a while to find these two again - perhaps your
> cabal ticket is the place to keep these urls?
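To make the ticket #1839 idea concrete: the point of a dump feature is that one ghc-pkg invocation returns every InstalledPackageInfo record at once, and Cabal splits and parses them itself. Here is a minimal sketch of the Cabal-side splitting, assuming (this is an assumption about the not-yet-implemented dump format) that records are separated by a line containing just "---":

```haskell
-- Sketch: split the output of a hypothetical 'ghc-pkg dump' into
-- per-package chunks, one per InstalledPackageInfo record.  The
-- "---" separator line is an assumed convention for illustration.
splitDump :: String -> [String]
splitDump = map unlines . splitOn (== "---") . lines
  where
    -- split a list on elements matching p, dropping the separators
    splitOn p xs = case break p xs of
      (chunk, [])       -> [chunk]
      (chunk, _ : rest) -> chunk : splitOn p rest

main :: IO ()
main = mapM_ putStrLn (splitDump sample)
  where
    sample = unlines
      [ "name: base",       "version: 3.0", "---"
      , "name: bytestring", "version: 0.9" ]
```

Each chunk would then be handed to the existing InstalledPackageInfo parser, so ghc-pkg stays the owner of the package database format and Cabal does its queries over the parsed records in memory.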
Do please file feature requests in the Cabal/Hackage trac: http://hackage.haskell.org/trac/hackage/

As I see it, all of these cool features depend on Cabal being based on a make-like system internally and doing the dependency chasing itself. If you follow cabal-devel you'll see we've started on some prototyping work in that direction, and we would welcome people to join in the fun.

> in current ghc head, there are several related ghci commands:
>
> - ':show modules' (currently loaded modules)
> - ':show packages' (package options + packages loaded)
> - ':set' (now shows dynamic flag settings)
>
> so you could load your main module into ghci, let it figure out
> the module dependencies and packages, and use those
> commands to contribute to an initial cabal file. you'd still need
> to eliminate non-exposed modules and implicit dependencies
> in packages, as well as flags that are at default settings.

If Cabal can do dependency chasing then it can figure out the required modules and packages. It could go further than what ghci can do, by also finding which pre-processors are necessary and the ultimate source files for each module.

> that leaves other settings, like paths, which brings us back
> to the earlier question and emails:
>
> for any non-trivial project depending on non-standard
> settings, shouldn't ghci work from a cabal file, or ghci
> be started by cabal, or ghci and cabal collaborate, to
> record those settings from the start and uniformly?
>
> it makes no sense to re-enter .cabal settings in ghci just
> because it can't load .cabal files, and it makes no sense
> to write .cabal files from scratch when you've already
> constructed a workable ghci session for your project.

I agree.

> the solution i've in mind is something like the ability
> to import and export ghci sessions as .cabal files.

Interesting.
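As a rough illustration of the "export a session as a .cabal file" idea: given the module list from ':show modules' and the package list from ':show packages', generating a minimal .cabal skeleton is mostly string assembly. The field names below are the standard Cabal ones, but the function and its inputs are made up for this sketch; a real tool would still have to filter out non-exposed modules, implicit dependencies, and flags at their defaults, as noted above.

```haskell
import Data.List (intercalate, isInfixOf)

-- Hypothetical sketch: assemble a minimal .cabal file from a package
-- name plus the module and package lists a ghci session reports.
minimalCabal :: String -> [String] -> [String] -> String
minimalCabal name mods pkgs = unlines $
  [ "name: " ++ name
  , "version: 0.1"
  , "build-type: Simple"
  , "exposed-modules:"
  ] ++ map ("  " ++) mods ++          -- one indented line per module
  [ "build-depends: " ++ intercalate ", " pkgs ]

main :: IO ()
main = putStr (minimalCabal "myproject" ["Main", "Foo"] ["base", "filepath"])
```

This is deliberately one-way (session to file); the import direction, loading a .cabal file's settings into ghci, is the harder half of the round trip.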
That relates somewhat to this feature request:
http://hackage.haskell.org/trac/hackage/ticket/171

> enabling ghci to explain all its session settings is one
> way to get there, and since ghci is currently my main
> interface to haskell code, i tend to add to that path. i
> just patched ghc-pkg because it seemed sensible and
> simple, but i also have a patch to make ':info <module>'
> work in ghci.
>
> do you have a specification of what cabal needs from
> an implementation, how it would like to query for that
> information, and an idea of how ghc, hugs, nhc, etc.
> would implement that spec?

We have some rough general ideas. Certainly a collection of InstalledPackageInfo records is enough information. We need to be able to do things like map module names to packages, and possibly to individual files, so that we can track changes in installed packages which might require a rebuild of the current code.

> for ghc, one might be tempted to ask for an augmented
> -M, but the prevalence of hugs and ghci seems to suggest
> an "import/export session as .cabal" route as necessary,
> if not sufficient. then there's nhc, which left dependency
> chasing to a separate nhc-make tool (wasn't that one
> of the starting points for cabal?), and i don't know about
> the other implementations.

ghc -M is unfortunately not quite sufficient, because of the search-directory shadowing problem. Suppose I've split my sources across two directories a/ and b/, and I use a module Foo which I initially keep in b/Foo.hs. Then ghc -M will find b/Foo.hs and resolve imports of module Foo to that file. If I then drop in a/Foo.hs (which shadows b/Foo.hs, since a/ is searched first) and only look at my existing dependency graph, I'll get the wrong result compared to making clean and regenerating everything. This is a classic bug that most makefile systems suffer from, because gcc -M only produces the final current dependency graph; it does not say which locations it looked in. So for accuracy I think it is important for Cabal to do the dependency chasing itself.
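The shadowing problem above can be sketched as a pure function: resolve a module name against an ordered search path, but keep the whole list of probed locations, which is exactly the extra information ghc -M / gcc -M do not emit. The function and its arguments are made up for illustration; a real implementation would probe the filesystem rather than take the set of existing files as a parameter.

```haskell
import Data.List (find)

-- Sketch of search-path module resolution.  Given the ordered
-- search dirs and the set of files that exist, return the chosen
-- file (if any) *and* every candidate location probed on the way.
-- Remembering the probes is what lets a dep-graph cache detect
-- that a newly added a/Foo.hs now shadows b/Foo.hs.
resolve :: [FilePath] -> [FilePath] -> String
        -> (Maybe FilePath, [FilePath])
resolve dirs existing modname =
  let candidates = [ d ++ "/" ++ modname ++ ".hs" | d <- dirs ]
  in (find (`elem` existing) candidates, candidates)

main :: IO ()
main = do
  -- Initially only b/Foo.hs exists, so Foo resolves there.
  print (resolve ["a", "b"] ["b/Foo.hs"] "Foo")
  -- After dropping in a/Foo.hs the same probe list hits a/ first.
  -- A cache that stored only the hit would silently go stale; one
  -- that stored the probed locations can see the answer changed.
  print (resolve ["a", "b"] ["a/Foo.hs", "b/Foo.hs"] "Foo")
```

A makefile regenerated from scratch gets this right by accident; an incremental dependency graph only gets it right if the negative probes are recorded too.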
We need to do the dependency chasing in Cabal anyway for the pre-processors. So, yes, I think we should be more ambitious and build a slicker system.

Duncan

_______________________________________________
Cvs-ghc mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/cvs-ghc
