Out of curiosity: Do you have a better solution for Haddock, if the
requirement is to be able to understand GHC-specific code? Perhaps one
could avoid having to modify the parser, and instead try to match
Haddock comments with declarations by their SrcLocs.

naively, i'd think that haddock adds functionality to ghc. so if
ghc does its job and passes comments it can't handle as comment
strings in the ast, then haddock could make another pass, interpreting
some of those comments and weaving them together with info from the
symbol table, or with definitions located near those comments. that
way, ghc would also never fail with syntax errors in comments.

Perhaps a better solution would be to represent the documentation by
Dynamics in GHC's abstract syntax, and to pass in the functions that parse
and rename the documentation annotations, perhaps in the DynFlags.
Good idea. Sounds like it would work to me.
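For what the Dynamics proposal might look like concretely, here is a minimal sketch (all names are invented, and the real DynFlags record is far larger): GHC stores the doc annotation opaquely as a Dynamic, and the client supplies the parsing function through the flags, so GHC never needs to know Haddock's doc type.

```haskell
import Data.Dynamic

-- the flags record carries the client-supplied doc parser
newtype Flags = Flags { parseDoc :: String -> Dynamic }

-- a declaration with an opaque, optional doc annotation
data Decl = Decl { declName :: String, declDoc :: Maybe Dynamic }

-- haddock's own doc type; ghc never sees its definition
newtype Doc = Doc [String] deriving (Eq, Show)

haddockFlags :: Flags
haddockFlags = Flags { parseDoc = toDyn . Doc . lines }

annotate :: Flags -> String -> String -> Decl
annotate flags name raw = Decl name (Just (parseDoc flags raw))

-- only code that knows the concrete type can get the doc back out
recoverDoc :: Decl -> Maybe Doc
recoverDoc d = declDoc d >>= fromDynamic
```

the point of the Dynamic indirection is that GHC's abstract syntax stays independent of Haddock's documentation types, at the cost of a runtime type check on recovery.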

just curious: what makes this tighter integration, or the use
of Dynamics, necessary?

We should also move the Haddock comments to the .hi files, as Duncan
likes to point out. For those that didn't know: Haddock currently needs
to invoke GHC's type checker itself on a module to get the comments.
Getting them from the .hi files would be much more compositional.

that sounds as if you'd also need the source locations in the .hi
files, to match up comments with symbols? that would be nice, because
i'd like to have the source locations there for other uses, such as
:ctags/:etags (*), :info, :edit, :browse, etc!-)

claus

(*) actually, since emacs thinks that line feeds terminate paragraphs,
not lines, and are anyway best ignored, one cannot construct a precise
etags file from line numbers alone.. but emacs makes up for that by
searching..
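to illustrate the :etags idea, a sketch of emitting an etags stanza from (name, line, byte-offset) triples, as one could if .hi files recorded source locations (the function name and the input shape are invented; the DEL/SOH byte layout is the standard etags format). lacking the source text, the tag name doubles as the match text, which is exactly the imprecision noted in (*):

```haskell
-- one etags file section: "\f\n<file>,<bodysize>\n" followed by
-- lines of the form <matchtext>DEL<name>SOH<line>,<offset>
etagsStanza :: FilePath -> [(String, Int, Int)] -> String
etagsStanza file tags =
  "\f\n" ++ file ++ "," ++ show (length body) ++ "\n" ++ body
  where
    body = concat
      [ name ++ "\DEL" ++ name ++ "\SOH"
          ++ show line ++ "," ++ show off ++ "\n"
      | (name, line, off) <- tags ]
```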

_______________________________________________
Cvs-ghc mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/cvs-ghc
