Re: [Haskell-cafe] some way to reverse engineer lambda expressions out of the debugger?
tphyahoo:
> I am a newbie learning haskell. (First forum post.)
>
> I am wondering if there is a trick to get debugging information about
> functions out of the environment (which for me, for now, is ghci).
>
> In this example,
>
> *UnixTools> :t map (*) [1,2]
> map (*) [1,2] :: (Num a) => [a -> a]
>
> This is very nice, but I would *really* like to see something like
>
> *UnixTools> explodeLambda ( map (*) [1,2] )
> [(\x -> 1*x),(\x -> 2*x)]
>
> Yes, maybe I'm dreaming, but I would like haskell to reverse engineer /
> pretty print lambda expressions for me.

You can use 'hat' to trace/reduce expressions.

    http://www.cs.york.ac.uk/fp/hat/

The new ghci debugger can print closures too, but I'm not sure if it
does what you want. All very possible, maybe a little experimental
though.

-- Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] some way to reverse engineer lambda expressions out of the debugger?
I am a newbie learning haskell. (First forum post.)

I am wondering if there is a trick to get debugging information about
functions out of the environment (which for me, for now, is ghci).

In this example,

    *UnixTools> :t map (*) [1,2]
    map (*) [1,2] :: (Num a) => [a -> a]

This is very nice, but I would *really* like to see something like

    *UnixTools> explodeLambda ( map (*) [1,2] )
    [(\x -> 1*x),(\x -> 2*x)]

Yes, maybe I'm dreaming, but I would like haskell to reverse engineer /
pretty print lambda expressions for me.

(Note that:

    *UnixTools> map ($ 5) [(\x -> 1*x),(\x -> 2*x)]
    [5,10]
    *UnixTools> map ($ 5) ( map (*) [1..2] )
    [5,10]

So these expressions really are the same; it could be argued that the
first expression is in some sense easier to read if you are debugging
something complex.)

I would like to have something like "Data::Dumper" from perl, but of
course, on steroids. Is something like this possible, or being worked
on? Or probably never going to happen?

Cheers, thomas.
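[A note on what is and isn't possible here: a compiled function value in
GHC carries no printable source, so an `explodeLambda` that inspects
real closures isn't available. The effect can be approximated by
building the lambdas symbolically. A minimal sketch, assuming nothing
beyond the Prelude -- `Expr`, `render`, `eval` and `explodeLambda` are
all made-up names for illustration, not an existing API:]

```haskell
-- Represent "\x -> n*x" as data instead of as an opaque closure,
-- so it can be both shown and applied.
data Expr = Var            -- the bound variable x
          | Lit Integer    -- a literal
          | Mul Expr Expr  -- multiplication

render :: Expr -> String
render Var       = "x"
render (Lit n)   = show n
render (Mul a b) = render a ++ "*" ++ render b

eval :: Integer -> Expr -> Integer
eval x Var       = x
eval _ (Lit n)   = n
eval x (Mul a b) = eval x a * eval x b

-- The symbolic analogue of: map (*) [1,2]
explodeLambda :: [Integer] -> [Expr]
explodeLambda ns = [Mul (Lit n) Var | n <- ns]

main :: IO ()
main = do
  -- pretty-print each "lambda"
  mapM_ (\e -> putStrLn ("(\\x -> " ++ render e ++ ")"))
        (explodeLambda [1,2])
  -- the analogue of: map ($ 5) ( map (*) [1,2] )
  print (map (eval 5) (explodeLambda [1,2]))  -- prints [5,10]
```

This is the usual workaround: keep the expressions you want to inspect
in a small data type for as long as you need to debug them, and only
then convert to real functions.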
[Haskell-cafe] Possible (GHC or HGL) bug or ??
Dear haskell-cafe patrons,

I've been working through an exercise in Hudak's _The Haskell School
of Expression_ (ex. 3.2, creating a snowflake fractal image), and am
seeing some strange drawing behavior that I'm hoping somebody can shed
some light on. My initial solution is below (it requires HGL for
Graphics.SOE):

module Main where

import Graphics.SOE

main = runGraphics (
  do w <- openWindow "Snowflake Fractal" (600, 600)
     fillStar w 300 125 256 (cycle $ enumFrom Blue)
     spaceClose w
  )

spaceClose w = do
  k <- getKey w
  if k == ' ' then closeWindow w else spaceClose w

minSize = 2 :: Int

fillStar :: Window -> Int -> Int -> Int -> [Color] -> IO ()
fillStar w x y h clrs | h <= minSize = return ()
fillStar w x y h clrs = do
  drawInWindow w (withColor (head clrs) (polygon [t1p1,t1p2,t1p3,t1p1]))
  drawInWindow w (withColor (head clrs) (polygon [t2p1,t2p2,t2p3,t2p1]))
  sequence_ $ map recur [t1p1,t1p2,t1p3,t2p1,t2p2,t2p3]
  where
    tanPiOverSix = tan (pi/6) :: Float
    halfSide = truncate $ tanPiOverSix * fromIntegral h
    hFrag = truncate $ tanPiOverSix * tanPiOverSix * fromIntegral h
    (t1p1,t1p2,t1p3) = ((x, y), (x-halfSide, y+h), (x+halfSide, y+h))
    (t2p1,t2p2,t2p3) = ((x-halfSide, y+hFrag), (x, y+h+hFrag),
                        (x+halfSide, y+hFrag))
    reVert y = y - ((h - hFrag) `div` 3)
    recur pnt = fillStar w (fst pnt) (reVert (snd pnt)) (h `div` 3)
                         (tail clrs)

This basically works, in that it does exactly what I want in Hugs, but
GHC sometimes pauses partway through rendering, and does not continue
rendering until I type any key (except space, which exits) or
unfocus/refocus the window, or move the mouse pointer across the
window. Sometimes, more often the first time in a GHCI session, it
renders completely with no pauses, and it seems to pause more and more
if I evaluate main, then close the window, evaluate again in the same
GHCI session, repeatedly. The same pausing behavior is observed in a
GHC-compiled executable.
When the problem occurs, there is a message to the console that says:
"thread blocked indefinitely".

Versioning info:
  CPU:  Pentium M
  OS:   Gentoo GNU/Linux, kernel 2.6.18
  GCC:  4.1.1
  GHC:  6.6
  HGL:  3.1
  HUGS: March 2005
[all software compiled from source using gentoo ebuilds]

Is anybody else familiar with this behavior? If not, any suggestions
as to where I should file this as a potential bug? GHC? HGL? Both?
Elsewhere?

Thanks in advance for any information.

Calvin

p.s. Any stylistic or other comments about the code welcome too.
Re: [Haskell-cafe] Literate Haskell source files. How do I turn them into something I can read?
On Sun, 2006-31-12 at 16:52 -0800, Iavor Diatchki wrote:
> I also dislike Haskell code that contains LaTeX macros as it makes
> reading the comments more difficult (and I know both Haskell and
> LaTeX). Also converting the Haskell code to pdf is probably not a
> good option because you cannot use all the usual tools in your editor:
> I don't read code in the same way as I read a book.

The PDF isn't ideal, but it is better than what I've got now -- a PDF
file would be readable, you see.

Part of the problem with trying to read the .lhs files (the ones not
marked with the simpler '>' markup) is that the comment sections --
the part that's supposed to document the code to make it
understandable -- seem to be threaded with macro substitution calls
(or whatever it is called in LaTeX). So I'll see something like this:

    In favour of omitting \tr{!B!}, \tr{!C!}:
    - {\em May} save a heap overflow test, if ...A... allocates
      anything. The other advantage of this is that we can use
      relative addressing from a single Hp to get at all the closures
      so allocated.

Looking at this I'm seeing what appears to be some kind of LaTeX
variable name being expanded with what looks like a macro called
"\tr". So what is this mysterious "B"? I have no idea. The code blocks
both before and after this comment don't seem to show anything that
this B-expansion would be turned into. At least if I could get the PDF
output, the macro expansion would be replaced with whatever name is in
B's stead.

> There are two ways to mark code. Using the one
> markup, code lines start with >.

This format I'm familiar with and can read readily. (I don't see the
point of it -- what does this buy me that {- -} blocks don't? -- but I
can read it without too much difficulty.)

> This is a comment again. The second way to mark code is to place it
> between \begin{code} and \end{code}. For example:
> \begin{code}
> main = print "This is code"
> pi = 3.14
> \end{code}
>
> An here we have comments again.
This is the stuff that's hurting. The problem is that to my
unschooled-in-Haskell eyes, the markup and the Haskell source blur
together into a soup of executable line noise. The comments prove
unhelpful because of the macro expansions I can't decode (like that
"B" thing above), and the actual Haskell source can easily get lost in
the mix. In the darcs source code, for example, there will be
literally pages of comments (the user manual is part of the source
code) with a little five-line block of code here, a ten-line block
there. It's very easy to overlook the code in the sea of
macro-expanded commentary.

Being able to take the source and run it through something that
formats the comments and the code differently and clearly (and expands
macros along the way) would render the whole thing far more readable
(read: readable), even if I do lose the ability to easily navigate
through the source to modify it for experiments, etc.

The ideal world would be something that expands the LaTeX code --
macro expansion in particular -- and strips the formatting code, so
that the code and the commentary are clearly separated but both are
readable in a plain text editor. A good second place would be a way to
actually take these .lhs files and make them PDFs (or DVIs or PSs or
even HTMLs) so that at least the macros get expanded and the comments
aren't interrupted by formatting code. A distant third place would be
to strip the comments away and leave just the raw code behind -- but
without the long, distracting gaps that unlit leaves.

Now the distant third I can do, thanks to the (excessively snarky,
IMO) comment Tony Finch left behind. I would like to know, however, if
there is any way for me to get my second-place or even first-place
options filled. Like a working command line for pdflatex? Or something
better?

And me? I'm going to use XML for literate Haskell. ;)

-- Michael T.
Richter

"I have no purpose, directly or indirectly, to interfere with the
institution of slavery in the States where it exists."
--Abraham Lincoln
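[For the first-place option -- separating bird-track commentary from
code in plain text -- a few lines of Haskell are enough for the '>'
style of literate source. This is a hedged sketch, not an existing
tool: it handles only bird tracks, not \begin{code} blocks, and it
does not expand LaTeX macros.]

```haskell
-- Split a bird-track literate Haskell file into commentary and code.
-- Lines starting with ">" are code; everything else is commentary.
import System.Environment (getArgs)
import Data.List (isPrefixOf)

splitLhs :: String -> (String, String)
splitLhs src = (unlines comments, unlines code)
  where
    ls       = lines src
    code     = [drop 2 l | l <- ls, ">" `isPrefixOf` l]
    comments = [l | l <- ls, not (">" `isPrefixOf` l)]

main :: IO ()
main = do
  [file] <- getArgs
  (comments, code) <- fmap splitLhs (readFile file)
  putStrLn "==== commentary ===="
  putStr comments
  putStrLn "==== code ===="
  putStr code
```

Run it as `runghc SplitLhs.hs Foo.lhs`; piping each half through a
pager gives commentary without the code interleaved, and vice versa.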
Re: [Haskell-cafe] Literate Haskell source files. How do I turn them into something I can read?
Hi,

I also dislike Haskell code that contains LaTeX macros, as it makes
reading the comments more difficult (and I know both Haskell and
LaTeX). Also, converting the Haskell code to pdf is probably not a
good option, because you cannot use all the usual tools in your
editor: I don't read code in the same way as I read a book.

Anyways, I would leave the source alone, but here is how you can
determine which parts are code and which are comments (I also find it
useful to use the highlighting of my editor, which highlights comments
and code differently). Literate Haskell scripts usually have the
extension .lhs, and in them the convention of what is code and what is
comment is reversed: everything is a comment by default, and code is
marked specially.

There are two ways to mark code. Using the one markup, code lines
start with >. For example:

This is a comment but the lines below contain code:

> main = print "This is code"
> -- a normal comment within a code block
> pi = 3.14

This is a comment again. The second way to mark code is to place it
between \begin{code} and \end{code}. For example:

\begin{code}
main = print "This is code"
pi = 3.14
\end{code}

And here we have comments again.

Personally, I prefer the first form of markup but, I guess, other
people like the second form, so Haskell provides both, which may be
confusing.

Hope this helps, and Happy New Year to everyone!
-Iavor

On 12/31/06, Michael T. Richter <[EMAIL PROTECTED]> wrote:
> On Sat, 2006-30-12 at 02:57 -0500, Cale Gibbard wrote:
> > Assuming that it's LaTeX-based literate source, you usually run
> > pdflatex on it to get a pdf of the code, but I'm not familiar with
> > the darcs code in particular, and whether anything special needs to
> > be done, or whether they have a specialised build for that.
>
> It appears to be the same markup used in the GHC compiler source code
> (which does not bode well for my future reading of the GHC source
> either).
> Running it on the darcs source code generates several dozen pages
> (I'm not exaggerating!) of error messages and no dvi, ps or pdf
> files. Playing around with various command line options doesn't help.
>
> Running it on the GHC source code generates simpler error messages,
> but error messages nonetheless. Then it dumps me in some kind of
> interactive mode. Here's some sample output:
>
> =8<=
> This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4)
> entering extended mode
> (./CgCon.lhs
> LaTeX2e <2003/12/01>
> Babel and hyphenation patterns for american, french, german, ngerman,
> bahasa, basque, bulgarian, catalan, croatian, czech, danish, dutch,
> esperanto, estonian, finnish, greek, icelandic, irish, italian,
> latin, magyar, norsk, polish, portuges, romanian, russian, serbian,
> slovak, slovene, spanish, swedish, turkish, ukrainian, nohyphenation,
> loaded.
> ! Undefined control sequence.
> l.4 \section
>  [CgCon]{Code generation for constructors}
> ?
> =8<=
>
> I don't know LaTeX (if that's what this is) at all, and I don't know
> Haskell sufficiently comfortably to actually distinguish reliably
> between LaTeX code and Haskell, so the direct .lhs source code is
> basically useless to me. What's the trick people use to read it?
>
> -- Michael T. Richter
> "Thanks to the Court's decision, only clean Indians or colored people
> other than Kaffirs, can now travel in the trams." --Mahatma Gandhi
Re: Re[8]: [Haskell-cafe] Strange type behavior in GHCi 6.4.2
What Kirsten said.

I think you can be much more productive in optimizing your code if you
actually understand what's going on. I usually don't go as far as
looking at compiler intermediate code; I usually stick with profiling
(or look at assembly code if it's a really performance-critical inner
loop). Then you can start optimizing. That can be by changing the
algorithm, changing data representation, strictness annotations, etc.
It can also be by inserting some INLINE or SPECIALIZE pragmas, but
that's more rare (don't get me wrong about those pragmas, I introduced
them in Haskell with hbc). But I think just adding pragmas
willy-nilly is a bad idea; I find that most serious performance
problems cannot be solved by those means; instead you need a
higher-level approach.

-- Lennart

On Dec 31, 2006, at 11:47 , Kirsten Chevalier wrote:

> On 12/31/06, Bulat Ziganshin <[EMAIL PROTECTED]> wrote:
> > this don't say anything place. and these rules have their own
> > source: it's hard to optimize using your path. but when program
> > optimization is just adding a few options/pragmas to the program,
> > it becomes cheap enough to change these rules. didn't you thought
> > about it?
>
> In my experience, adding pragmas and toying with options without
> insight into what they do is not "cheap", because it takes up the
> programmer's time, and time is more important than anything else.
> Every minute spent typing in pragmas is a minute lost that could have
> been spent thinking about how to write your code more elegantly, and
> in my experience -- and again, maybe it's just that I'm slow --
> adding pragmas doesn't help. When it comes to inlining and
> specializing, GHC tends to be smarter than I am. (Once more, maybe
> it's just that I'm slow.) I'd rather focus my energies on doing the
> things GHC can't (usually) do, like replacing an O(n^2) algorithm
> with an O(log n) algorithm.
Cheers, Kirsten

--
Kirsten Chevalier * [EMAIL PROTECTED] * Often in error, never in doubt
"Happy is all in your head / When you wake up and you're not dead /
It's a sign of maturation / That you've lowered your expectations..."
-- Barbara Kessler
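[For readers unfamiliar with the pragmas under discussion, this is
roughly what they look like in source. A hedged illustration: the
functions are made up, and whether either pragma actually helps
depends entirely on the program and on profiling.]

```haskell
module Example (norm, dot) where

-- Ask GHC to compile a monomorphic copy of this overloaded function
-- at Double, avoiding dictionary passing at that type.
norm :: Floating a => [a] -> a
norm = sqrt . sum . map (^ (2 :: Int))
{-# SPECIALIZE norm :: [Double] -> Double #-}

-- Ask GHC to inline this function at its call sites.
dot :: Num a => [a] -> [a] -> a
dot xs ys = sum (zipWith (*) xs ys)
{-# INLINE dot #-}
```

As both sides of the thread note, these are annotations on code that
already exists; they change how GHC compiles a function, not what it
computes, which is why profiling first is the usual advice.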
[Haskell-cafe] Re: Seeking advice on a style question
>> In summary, I think that the dependencies on the pagemaster are not
>> adequate, he mixes too many concerns that should be separated.
>
> True, but then that's even more miscellaneous bits and pieces to
> carry around. I guess what makes me uncomfortable is that when I'm
> writing down a function like process1 (not its real name, as you
> might imagine), I want to concentrate on the high-level data flow and
> the steps of the transformation. I don't want to have to expose all
> of the little bits and pieces that aren't really relevant to the
> high-level picture. Obviously, in the definitions of the functions
> that make up process1, those details become important, but all of
> that should be internal to those function definitions.

Yes, we want to get rid of the bits and pieces. Your actual code is
between two extremes that both manage to get rid of them. One extreme
is the "universal" structure, like you already noted:

> Alternatively, I can wrap all of the state up into a single universal
> structure that holds everything I will ever need at every step, but
> doing so seems to me to fly in the face of strong typing; at the
> early stages of processing, the structure will have "holes" in it
> that don't contain useful values and shouldn't be accessed.

Currently, (pagemaster) has tendencies to become such a universal
beast. The other extreme is the one I favor: the whole pipeline is
expressible as a chain of function compositions via (.). One should be
able to write

  process = rectangles2pages . questions2rectangles

This means that (rectangles2pages) comes from a (self-written) layout
library and that (questions2rectangles) comes from a question
formatting library, and both concerns are completely separated from
each other. If such a factorization can be achieved, you get clear
semantics, bug reduction and code reuse for free. Of course, the main
problem is: the factorization does not arise by coding, only by
thinking.
Often the situation is as follows, and I for myself encounter it again
and again: one starts with an abstraction along function composition,
but it quickly turns out, as you noted, that "there are some
complicated reasons why that doesn't work". To get working code, one
creates some miniature "universal structure" that incorporates all the
missing data that makes the thing work. After some time, the different
concerns get more and more intertwined and soon every piece of data
depends on everything else, until the code finally gets
unmaintainable: it became "monolithic".

What can be done? The original problem was that the solutions to the
originally separated concerns (layout library and
questions2rectangles) simply were not powerful, not general enough.
The remedy is to separately increase the power and expressiveness of
both libraries until the intended result can be achieved by plugging
them together. Admittedly, this is not an easy task. But the outcome
is rewarding: by thinking about the often ill-specified problems, one
understands them much better, and it most often turns out that some
implementation details were wrong, and so on. In contrast, the ad-hoc
approach that introduces miniature "universal structures" does not
make the libraries more general, but tries to fit them together by
appealing to the special case, the special problem at hand. In my
experience, this only makes things worse. The point is: you have to
implement the functionality anyway, so you may as well grab some free
generalizations and implement it once and for all in an independent
and reusable library.

I think that the following toy example (inspired by a discussion from
this mailing list) shows how to break intertwined data dependencies:

  foo :: Keyvalue -> (Blueprint, Map') -> (Blueprint', Map)
  foo x (bp,m') = (insert x bp, uninsert x bp m')

The type for (foo) is much too general: it says that foo may mix the
(Blueprint) and the (Map') to generate (Blueprint').
But this is not the case; the type for foo introduces data
dependencies that are not present at all. A better version would be

  foo' :: Keyvalue -> Blueprint -> (Blueprint', Map' -> Map)
  foo' x bp = (insert x bp, \m' -> uninsert x bp m')

Here, it is clear that the resulting (Map) depends on (Blueprint) and
(Map'), but that the resulting (Blueprint') does not depend on (Map').
The point relevant to your problem is that one can use (foo') in more
compositional ways than (foo), simply because the type allows it. For
instance, you can recover (insert) from (foo'):

  insert :: Keyvalue -> Blueprint -> Blueprint'
  insert x bp = fst $ foo' x bp

but this is impossible with (foo).* In the original problem, the type
signature for (foo') was the best one could get. But here, the best
type signature is of course

  foo'' :: ( Keyvalue -> Blueprint -> Blueprint'
           , Keyvalue -> Blueprint -> Map' -> Map )
  foo'' = (insert, uninsert)

because in essence, (foo) is just the pair (insert, uninsert). One
morale from the above example is that functio
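[To make the toy example above loadable in ghci, here is a concrete
instantiation. All the types are stand-ins invented for illustration;
the original (Blueprint), (Map') etc. are abstract in the discussion.]

```haskell
-- Concrete stand-ins for the abstract types in the example.
import qualified Data.Map as M

type Keyvalue   = (String, Int)
type Blueprint  = [String]          -- keys planned so far
type Blueprint' = [String]
type Map'       = M.Map String Int  -- map before the insertion
type Map        = M.Map String Int  -- map after the insertion

insert :: Keyvalue -> Blueprint -> Blueprint'
insert (k, _) bp = k : bp

uninsert :: Keyvalue -> Blueprint -> Map' -> Map
uninsert (k, v) _bp m' = M.insert k v m'

-- The compositional version: the Blueprint' is produced immediately,
-- while the Map is delayed as a function of Map'.
foo' :: Keyvalue -> Blueprint -> (Blueprint', Map' -> Map)
foo' x bp = (insert x bp, \m' -> uninsert x bp m')

main :: IO ()
main = do
  let (bp', mkMap) = foo' ("answer", 42) []
  print bp'                       -- ["answer"]
  print (M.toList (mkMap M.empty))  -- [("answer",42)]
```

Note how the caller can use (bp') without ever supplying a (Map'),
which is exactly the decoupling the type of (foo') promises.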
[Haskell-cafe] Second Call for Papers: TFP 2007, New York, USA
CALL FOR PAPERS
Trends in Functional Programming 2007
New York, USA
April 2-4, 2007
http://cs.shu.edu/tfp2007/

The symposium on Trends in Functional Programming (TFP) is an
international forum for researchers with interests in all aspects of
functional programming languages, focusing on providing a broad view
of current and future trends in Functional Programming. It aspires to
be a lively environment for presenting the latest research results
through acceptance by extended abstracts. A formal post-symposium
refereeing process then selects the best articles presented at the
symposium for publication in a high-profile volume.

TFP 2007 is co-hosted by Seton Hall University and The City College of
New York (CCNY) and will be held in New York, USA, April 2-4, 2007 at
the CCNY campus.

SCOPE OF THE SYMPOSIUM

The symposium recognizes that new trends may arise through various
routes. As part of the symposium's focus on trends we therefore
identify the following five article categories. High-quality articles
are solicited in any of these categories:

  Research Articles    leading-edge, previously unpublished research work
  Position Articles    on what new trends should or should not be
  Project Articles     descriptions of recently started new projects
  Evaluation Articles  what lessons can be drawn from a finished project
  Overview Articles    summarizing work with respect to a trendy subject

Articles must be original and not submitted for simultaneous
publication to any other forum. They may consider any aspect of
functional programming: theoretical, implementation-oriented, or more
experience-oriented. Applications of functional programming techniques
to other languages are also within the scope of the symposium.
Articles on the following subject areas are particularly welcomed:

  o Dependently Typed Functional Programming
  o Validation and Verification of Functional Programs
  o Debugging for Functional Languages
  o Functional Programming and Security
  o Functional Programming and Mobility
  o Functional Programming to Animate/Prototype/Implement Systems from
    Formal or Semi-Formal Specifications
  o Functional Languages for Telecommunications Applications
  o Functional Languages for Embedded Systems
  o Functional Programming Applied to Global Computing
  o Functional GRIDs
  o Functional Programming Ideas in Imperative or Object-Oriented
    Settings (and the converse)
  o Interoperability with Imperative Programming Languages
  o Novel Memory Management Techniques
  o Parallel/Concurrent Functional Languages
  o Program Transformation Techniques
  o Empirical Performance Studies
  o Abstract/Virtual Machines and Compilers for Functional Languages
  o New Implementation Strategies
  o any new emerging trend in the functional programming area

If you are in doubt on whether your article is within the scope of
TFP, please contact the TFP 2007 program chair, Marco T. Morazan, at
[EMAIL PROTECTED]

SUBMISSION AND DRAFT PROCEEDINGS

Acceptance of articles for presentation at the symposium is based on
the review of extended abstracts (6 to 10 pages in length) by the
program committee. Accepted abstracts are to be completed to full
papers before the symposium for publication in the draft proceedings
and on-line. Further details can be found at the TFP 2007 website.

POST-SYMPOSIUM REFEREEING AND PUBLICATION

In addition to the draft symposium proceedings, we intend to continue
the TFP tradition of publishing a high-quality subset of contributions
in the Intellect series on Trends in Functional Programming.
IMPORTANT DATES

  Abstract Submission:          February 1, 2007
  Notification of Acceptance:   February 20, 2007
  Registration Deadline:        March 2, 2007
  Camera Ready Full Paper Due:  March 9, 2007
  TFP Symposium:                April 2-4, 2007

PROGRAMME COMMITTEE

  John Clements            California Polytechnic State University, USA
  Marko van Eekelen        Radboud Universiteit Nijmegen, The Netherlands
  Benjamin Goldberg        New York University, USA
  Kevin Hammond            University of St. Andrews, UK
  Patricia Johann          Rutgers University, USA
  Hans-Wolfgang Loidl      Ludwig-Maximilians Universität München, Germany
  Rita Loogen              Philipps-Universität Marburg, Germany
  Greg Michaelson          Heriot-Watt University, UK
  Marco T. Morazán (Chair) Seton Hall University, USA
  Henrik Nilsson           University of Nottingham, UK
  Chris Okasaki            United States Military Academy at West Point, USA
  Rex Page                 University of Oklahoma, USA
  Ricardo Pena             Universidad Complutense de Madrid, Spain
  Benjamin C. Pierce       University of Pennsylvania, USA
  John Reppy               University of Chicago, USA
  Ulrik P. Schultz         University of Southern Denmark, Denmark
  Clara Segura             Universidad Complutense de
Re: [Haskell-cafe] Literate Haskell source files. How do I turn them into something I can read?
On Sun, 31 Dec 2006, Michael T. Richter wrote:
> So what is the right approach (and, for that matter, what is the
> wider problem)?

I thought that was clear from the other replies.

Tony.
--
f.a.n.finch <[EMAIL PROTECTED]> http://dotat.at/
HEBRIDES: CYCLONIC BECOMING WEST 6 TO GALE 8, OCCASIONALLY SEVERE GALE
9. ROUGH OR VERY ROUGH, OCCASIONALLY HIGH. RAIN THEN SQUALLY SHOWERS.
MODERATE OR GOOD.
Re: [Haskell-cafe] Literate Haskell source files. How do I turn them into something I can read?
[redirecting to the mailing list]

On 12/31/06, Michael T. Richter <[EMAIL PROTECTED]> wrote:
> On Sat, 2006-30-12 at 23:48 -0800, Kirsten Chevalier wrote:
> > I'm probably not the right person to explain why David Roundy chose
> > to write his code the way he did, since I've never even met him.
> > However, since as of 12/8/2006 there were 115 contributors to
> > darcs, perhaps reading the source code isn't as difficult as you
> > seem to think it is.
>
> What's the secret then? pdflatex vomits. unlit generates a few dozen
> pages of whitespace per five-line block of code. I'm at a loss to
> take this "literate" format and turn it into something which I can
> read.

The secret lies in adjusting your perspective. The literate format is
readable by humans as is. If there are specific things about it that
you find difficult, you can post about them here with *specific*
examples you find perplexing. But one reason why you may feel like
people aren't being helpful is that your question isn't specific
enough.

> > > I fail to see how making code which can't be read makes it
> > > more... readable.
> > "Can't" is an awfully strong word, isn't it?
>
> Nope. The source files as they are are gibberish.

Again, that's an awfully strong statement. Clearly, there exist any
number of people who can read them without finding them gibberish.
Perhaps it might be better to listen to what they are telling you than
to make contradictory declarations.

> There's some kind of markup (of a type I can't identify) mixed up
> with Haskell source. Given that I can't identify the markup and I'm a
> little hazy on Haskell at this point (part of the point of this
> exercise was to learn to read Haskell as it's used in real projects),
> the net effect is executable line noise. You know, like a perl
> program written circa 3.0.

Right, if you're trying to learn to read Haskell as it's used in real
projects, reading the .lhs files as they are is the way to go. Again,
if you find specific things hard to comprehend, ask here, or on the
IRC channel.
A general "all Haskell looks like line noise!" complaint is hard for us to answer. > I can only explain how I did it myself, which was through practice. OK. How do you read the literate source code? Which tools do you use to separate the code from the commentary? Which tools actually work to turn .lhs files into something human-readable? (pdflatex failed on the GHC code as well.) My eyes. And brain. I'm sorry that I can't be more helpful, but that's what most other people I've watched reading Haskell code use, too. Cheers, Kirsten -- Kirsten Chevalier* [EMAIL PROTECTED] *Often in error, never in doubt "Memo to myself: Do the dumb things I gotta do. Touch the puppet head." -- They Might Be Giants ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: Re[8]: [Haskell-cafe] Strange type behavior in GHCi 6.4.2
On 12/31/06, Bulat Ziganshin <[EMAIL PROTECTED]> wrote:
> this don't say anything place. and these rules have their own source:
> it's hard to optimize using your path. but when program optimization
> is just adding a few options/pragmas to the program, it becomes cheap
> enough to change these rules. didn't you thought about it?

In my experience, adding pragmas and toying with options without
insight into what they do is not "cheap", because it takes up the
programmer's time, and time is more important than anything else.
Every minute spent typing in pragmas is a minute lost that could have
been spent thinking about how to write your code more elegantly, and
in my experience -- and again, maybe it's just that I'm slow -- adding
pragmas doesn't help. When it comes to inlining and specializing, GHC
tends to be smarter than I am. (Once more, maybe it's just that I'm
slow.) I'd rather focus my energies on doing the things GHC can't
(usually) do, like replacing an O(n^2) algorithm with an O(log n)
algorithm.

Cheers, Kirsten

--
Kirsten Chevalier * [EMAIL PROTECTED] * Often in error, never in doubt
"Happy is all in your head / When you wake up and you're not dead /
It's a sign of maturation / That you've lowered your expectations..."
-- Barbara Kessler
Re[8]: [Haskell-cafe] Strange type behavior in GHCi 6.4.2
Hello Lennart,

Sunday, December 31, 2006, 2:48:01 PM, you wrote:
> Oh, I have other arguments against pragmas. :)
> But I think the best one is that optimization applied in the wrong
> place is just poor software engineering.
> As Michael A. Jackson said:
> The First Rule of Program Optimization: Don't do it.
> The Second Rule of Program Optimization (for experts only!): Don't
> do it yet.

this doesn't say anything about the place. and these rules have their
own source: it's hard to optimize using your path. but when program
optimization is just adding a few options/pragmas to the program, it
becomes cheap enough to change these rules. didn't you think about it?

--
Best regards,
Bulat                            mailto:[EMAIL PROTECTED]
Re: [Haskell-cafe] porting ghc
Hi Brian,

On Mon, Dec 18, 2006 at 10:07:19PM -0800, Brian McQueen wrote:
> I was trying to get a ghc going in my shell account the other day and
> found that the data at
> http://haskell.org/ghc/docs/6.6/html/building/sec-porting-ghc.html
> didn't apply at all.
>
> The system is a netbsd alpha which turns up as alpha-unknown-netbsd
> through configure.
>
> I didn't find any configure.in to modify, but there is a
> config.guess. Is that what I'm supposed to modify?

It should say configure.ac; I've just updated the doc sources
accordingly. If you search for "alpha" in that file then you should
find the stanzas. Just copy/paste a similar one and update it for
Alpha NetBSD.

Thanks
Ian
Re: Re[6]: [Haskell-cafe] Strange type behavior in GHCi 6.4.2
On Dec 31, 2006, at 6:48 , Lennart Augustsson wrote:
> Oh, I have other arguments against pragmas. :)
> But I think the best one is that optimization applied in the wrong
> place is just poor software engineering.
> As Michael A. Jackson said:
> The First Rule of Program Optimization: Don't do it.
> The Second Rule of Program Optimization (for experts only!): Don't
> do it yet.

/me is moderately mystified that an experienced programmer hasn't
discovered that one the hard way yet.

--
brandon s. allbery   [linux,solaris,freebsd,perl]   [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon university  KF8NH
Re: Re[6]: [Haskell-cafe] Strange type behavior in GHCi 6.4.2
Oh, I have other arguments against pragmas. :) But I think the best one is that optimization applied in the wrong place is just poor software engineering. As Michael A. Jackson said: The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet. -- Lennart On Dec 31, 2006, at 04:12 , Bulat Ziganshin wrote: Hello Lennart, Saturday, December 30, 2006, 5:27:01 PM, you wrote: Maybe it's simpler to add a lot of INLINE, but that can make a program slower as well as faster. I think the probability of this is much lower :) If you don't like pragmas you may try to find other arguments ;) -- Best regards, Bulat mailto:[EMAIL PROTECTED]
Re: [Haskell-cafe] Literate Haskell source files. How do I turn them into something I can read?
On Sat, 2006-30-12 at 02:57 -0500, Cale Gibbard wrote: > Assuming that it's LaTeX-based literate source, you usually run > pdflatex on it to get a pdf of the code, but I'm not familiar with the > darcs code in particular, and whether anything special needs to be > done, or whether they have a specialised build for that. It appears to be the same markup used in the GHC compiler source code (which does not bode well for my future reading of the GHC source either). Running it on the darcs source code generates several dozen pages (I'm not exaggerating!) of error messages and no dvi, ps or pdf files. Playing around with various command line options doesn't help. Running it on the GHC source code generates simpler error messages, but error messages nonetheless. Then it dumps me in some kind of interactive mode. Here's some sample output: =8<= This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4) entering extended mode (./CgCon.lhs LaTeX2e <2003/12/01> Babel and hyphenation patterns for american, french, german, ngerman, bahasa, basque, bulgarian, catalan, croatian, czech, danish, dutch, esperanto, estonian, finnish, greek, icelandic, irish, italian, latin, magyar, norsk, polish, portuges, romanian, russian, serbian, slovak, slovene, spanish, swedish, turkish, ukrainian, nohyphenation, loaded. ! Undefined control sequence. l.4 \section [CgCon]{Code generation for constructors} ? =8<= I don't know LaTeX (if that's what this is) at all and I don't know Haskell sufficiently comfortably to actually distinguish reliably between LaTeX code and Haskell, so the direct .lhs source code is basically useless to me. What's the trick people use to read it? -- Michael T. Richter Email: [EMAIL PROTECTED], [EMAIL PROTECTED] MSN: [EMAIL PROTECTED], [EMAIL PROTECTED]; YIM: michael_richter_1966; AIM: YanJiahua1966; ICQ: 241960658; Jabber: [EMAIL PROTECTED] "Thanks to the Court's decision, only clean Indians or colored people other than Kaffirs, can now travel in the trams." 
--Mahatma Gandhi
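For what it's worth, the mechanical part of reading such a file is simple: in LaTeX-style literate source, everything outside \begin{code}/\end{code} is prose. Here is a minimal sketch of that filter in Haskell (the function name `unlitCode` is hypothetical; GHC's real `unlit` tool also handles bird-track style and preserves line numbering for error messages):

```haskell
import Data.List (isPrefixOf)

-- Keep only the lines that sit between \begin{code} and \end{code};
-- everything else in a LaTeX-style .lhs file is commentary/markup.
unlitCode :: String -> String
unlitCode = unlines . skip . lines
  where
    skip []                                      = []
    skip (l:ls) | "\\begin{code}" `isPrefixOf` l = keep ls
                | otherwise                      = skip ls
    keep []                                      = []
    keep (l:ls) | "\\end{code}" `isPrefixOf` l   = skip ls
                | otherwise                      = l : keep ls
```

Running `unlitCode` over a .lhs file yields plain Haskell you can read or load into ghci without fighting the markup.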
Re: [Haskell-cafe] Literate Haskell source files. How do I turn them into something I can read?
On Sat, 2006-30-12 at 17:13 +, Tony Finch wrote: > > Apparently the GHC compiler can take .lhs files, strip them with "unlit" > > (a utility which I finally found buried deep in the GHC installation -- > > off-path) and then compile them normally. The problem I have is that > > unlit instead leaves behind these huge gaping (and highly distracting) > > stretches of whitespace where it takes out the markup. > uniq will solve this part of your problem, but you're probably taking the > wrong approach to the wider problem. So what is the right approach (and, for that matter, what is the wider problem)? -- Michael T. Richter Email: [EMAIL PROTECTED], [EMAIL PROTECTED] MSN: [EMAIL PROTECTED], [EMAIL PROTECTED]; YIM: michael_richter_1966; AIM: YanJiahua1966; ICQ: 241960658; Jabber: [EMAIL PROTECTED] "I think it is very beautiful for the poor to accept their lot [...]. I think the world is being much helped by the suffering of the poor people." --Mother Theresa
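To make the `uniq` suggestion concrete: after unlit removes the markup you are left with runs of blank lines, and `uniq` collapses adjacent duplicate lines, blank ones included. The same squeeze can be sketched in Haskell (the name `squeezeBlanks` is hypothetical; unlike `uniq`, it collapses only blank runs, so repeated non-blank lines survive):

```haskell
import Data.Char (isSpace)

-- Collapse every run of consecutive blank (whitespace-only) lines
-- into a single blank line, mimicking `cat -s` or the blank-line
-- effect of piping unlit output through `uniq`.
squeezeBlanks :: String -> String
squeezeBlanks = unlines . squash . lines
  where
    blank = all isSpace
    squash (a:b:rest)
      | blank a && blank b = squash (b:rest)
      | otherwise          = a : squash (b:rest)
    squash xs              = xs
```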
Re: [Haskell-cafe] Mo' comprehensions
Hello Diego, Saturday, December 30, 2006, 6:05:46 PM, you wrote: > Maybe there should be a Comprehensible class that's automatically > mapped to comprehension syntax. It's rather odd to have them only for > lists. That would be both more general and more elegant than just > bringing back monad comprehensions. > Is there any obvious reason why this wouldn't work? "it will make error messages harder to understand for novices", "it will make execution slower", "it will make programs more buggy" - standard list of arguments against making anything in Haskell more polymorphic :) -- Best regards, Bulat mailto:[EMAIL PROTECTED]
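For readers following along, the reason a generalized comprehension syntax is plausible at all is that list comprehensions already have a direct monadic reading. A sketch of the correspondence, using only standard Prelude and Control.Monad functions (the top-level names here are illustrative):

```haskell
import Control.Monad (guard)

-- The comprehension and the do-block below are two spellings of the
-- same computation.  The do-block type-checks in any MonadPlus, which
-- is exactly the extra generality that monad comprehensions (or a
-- Comprehensible-style class) would expose through the bracket syntax.
asComprehension :: [Int]
asComprehension = [ x + y | x <- [1,2], y <- [10,20], x < y ]

asDoBlock :: [Int]
asDoBlock = do
  x <- [1,2]
  y <- [10,20]
  guard (x < y)
  return (x + y)
```

Both produce `[11,21,12,22]`.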
Re[6]: [Haskell-cafe] Strange type behavior in GHCi 6.4.2
Hello Kirsten, Saturday, December 30, 2006, 6:23:09 PM, you wrote: > I agree that profiling and reading code dumps can be daunting, but in > my opinion, it's better to learn these skills once and for all (and > unfortunately, these skills are still necessary given the current > level of Haskell technology) and gain insight into how to use the > compiler to get the code you want than to practice cargo-cult > programming in the form of wanton pragmas. I agree - you will need to try this yourself once before you decide to do what I propose :) -- Best regards, Bulat mailto:[EMAIL PROTECTED]
Re[6]: [Haskell-cafe] Strange type behavior in GHCi 6.4.2
Hello Lennart, Saturday, December 30, 2006, 5:27:01 PM, you wrote: > Maybe it's simpler to add a lot of INLINE, but that can make a program > slower as well as faster. I think the probability of this is much lower :) If you don't like pragmas you may try to find other arguments ;) -- Best regards, Bulat mailto:[EMAIL PROTECTED]
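For anyone joining the thread late, the pragma under discussion looks like this (the function names are illustrative; whether inlining helps or hurts depends on the size of the function and what GHC can do at each call site):

```haskell
-- INLINE asks GHC to substitute the function body at every call site.
-- For a tiny arithmetic helper this typically enables further
-- optimization; for a large function it can bloat the generated code,
-- which is the "slower as well as faster" risk mentioned above.
{-# INLINE scale #-}
scale :: Double -> Double -> Double
scale factor x = factor * x

scaled :: [Double]
scaled = map (scale 2.5) [1.0, 2.0, 3.0]
```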