Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On Nov 3, 2007 5:00 AM, Ryan Dickie [EMAIL PROTECTED] wrote: Lossless file compression, AKA entropy coding, attempts to maximize the amount of information per bit (or byte) to be as close to the entropy as possible. Basically, gzip is measuring (approximating) the amount of information contained in the code. Hmmm, interesting idea. I think it would be interesting to compare the ratios between raw file size and its entropy (we can come up with a precise metric later). This would show us how concise the language and code actually is. Yeah, let's all write in bytecode using a hex editor :-D ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
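Ryan's compression-ratio idea can be sketched concretely. A minimal illustration (Python, chosen for its stdlib gzip module; the helper names and sample strings are mine, not the shootout's) - gzip size is only a crude upper bound on information content, not a true entropy measure:

```python
import gzip

def gzip_size(source: str) -> int:
    # Bytes of the gzip-compressed source text: a crude upper bound
    # on its information content, as suggested in the post above.
    return len(gzip.compress(source.encode("utf-8")))

def compression_ratio(source: str) -> float:
    # Raw bytes per compressed byte: higher means more redundancy,
    # i.e. less information per byte of source.
    return len(source.encode("utf-8")) / gzip_size(source)

# Highly repetitive "boilerplate" compresses far better than a dense one-liner:
verbose = "public static void main(String[] args) { }\n" * 40
terse = "main = interact (show . sum . map read . lines)\n"
assert compression_ratio(verbose) > compression_ratio(terse)
```

Note that for very short inputs the fixed gzip header dominates, so the ratio can dip below 1 - one reason the metric is only a rough proxy.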
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Sebastian Sylvan [EMAIL PROTECTED] writes: [LOC vs gz as a program complexity metric] Obviously no simple measure is going to satisfy everyone, but I think the gzip measure is more even-handed across a range of languages. It probably more closely approximates the amount of mental effort [..] I'm not sure I follow that reasoning? At any rate, I think the ICFP contest is much better as a measure of productivity. But, just like for performance, LOC for the shootout can be used as a micro-benchmark. Personally I think syntactic noise is highly distracting, and semantic noise is even worse! This is important - productivity doesn't depend so much on the actual typing, but on the ease of refactoring, identifying and fixing bugs, i.e. *reading* code. Verbosity means noise, and also lower information content in a screenful of code. I think there were some (Erlang?) papers where they showed a correlation between program size (in LOC), time of development, and (possibly) number of bugs - regardless of language. Token count would be good, but then we'd need a parser for each language, which is quite a bit of work to do... Whatever you do, it'll be an approximation, so why not 'wc -w'? With 'wc -c' for J etc. where programs can be written as spaceless sequences of symbols. Or just average chars, words and lines? -k -- If I haven't seen further, it is by standing in the footprints of giants
Re: Re[2]: [Haskell-cafe] Re: Why can't Haskell be faster?
On 02/11/2007, Bulat Ziganshin [EMAIL PROTECTED] wrote: Hello Sebastian, Thursday, November 1, 2007, 9:58:45 PM, you wrote: the ideal. Token count would be good, but then we'd need a parser for each language, which is quite a bit of work to do... i think that wc (word count) would be good enough approximation Yes, as long as you police abuse (e.g. if(somevar)somefunccall(foo,bar,baz) shouldn't be treated as a single word). -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862
Re[2]: [Haskell-cafe] Re: Why can't Haskell be faster?
Hello Sebastian, Thursday, November 1, 2007, 9:58:45 PM, you wrote: the ideal. Token count would be good, but then we'd need a parser for each language, which is quite a bit of work to do... i think that wc (word count) would be good enough approximation -- Best regards, Bulat mailto:[EMAIL PROTECTED]
[Haskell-cafe] Re: Why can't Haskell be faster?
Ketil Malde wrote: [LOC vs gz as a program complexity metric] Do either of those make sense as a program /complexity/ metric? Seems to me that's reading a lot more into those measurements than we should. It's slightly interesting that, while we're happily opining about LOCs and gz, no one has even tried to show that switching from LOCs to gz made a big difference in those program bulk rankings, or even provided a specific example that they feel shows how gz is misrepresentative - all opinion, no data. (Incidentally, LOC measures source code shape as much as anything else - programs in statement-heavy languages tend to be longer and thinner, and expression-heavy languages tend to be shorter and wider.)
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 11/2/07, Isaac Gouy [EMAIL PROTECTED] wrote: Ketil Malde wrote: [LOC vs gz as a program complexity metric] Do either of those make sense as a program /complexity/ metric? You're right! We should be using Kolmogorov complexity instead! I'll go write a program to calculate it for the shootout. Oh wait... Luke
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On Friday 02 November 2007 19:03, Isaac Gouy wrote: It's slightly interesting that, while we're happily opining about LOCs and gz, no one has even tried to show that switching from LOCs to gz made a big difference in those program bulk rankings, or even provided a specific example that they feel shows how gz is misrepresentative - all opinion, no data. Why gzip and not run-length encoding, Huffman coding, arithmetic coding, block sorting, PPM etc.? Choosing gzip is completely subjective and there is no logical reason to think that gzipped byte count reflects anything of interest. Why waste any time studying results in such an insanely stupid metric? Best case you'll end up concluding that the added complexity had no adverse effect on the results. In contrast, LOC has obvious objective merits: it reflects the amount of code the developer wrote and the amount of code the developer can see whilst reading code. -- Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/products/?e
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
--- Jon Harrop [EMAIL PROTECTED] wrote: -snip- In contrast, LOC has obvious objective merits: it reflects the amount of code the developer wrote and the amount of code the developer can see whilst reading code. How strange that you've snipped out the source code shape comment that would undermine what you say - obviously LOC doesn't tell you anything about how much stuff is on each line, so it doesn't tell you about the amount of code that was written or the amount of code the developer can see whilst reading code.
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 11/2/07, Isaac Gouy [EMAIL PROTECTED] wrote: How strange that you've snipped out the source code shape comment that would undermine what you say - obviously LOC doesn't tell you anything about how much stuff is on each line, so it doesn't tell you about the amount of code that was written or the amount of code the developer can see whilst reading code. It still tells you how much content you can see in a given amount of vertical space. I think the point, however, is that while LOC is not perfect, gzip is worse. It's completely arbitrary and favours languages which require you to write tons of book-keeping (semantic noise), as it will compress down all that redundancy quite a bit (while the programmer still has to write it, and maintain it). So gzip is even less useful than LOC, as it actively *hides* the very thing you're trying to measure! You might as well remove it altogether. Or, as has been suggested, count the number of words in the program. Again, not perfect (it's possible in some languages to write things which have no whitespace, but are still lots of tokens). -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862
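Sebastian's claim - that gzip compresses the book-keeping away - is easy to demonstrate. A hypothetical sketch (Python; the getter/setter text is invented for illustration, not taken from any shootout entry):

```python
import gzip

def gz_bytes(src: str) -> int:
    # The shootout's "gz" metric: bytes after gzip compression.
    return len(gzip.compress(src.encode("utf-8")))

# 40 lines of hypothetical getter/setter ceremony...
boilerplate = ("public int getX() { return x; }\n"
               "public void setX(int v) { x = v; }\n") * 20
loc = len(boilerplate.splitlines())

# ...gzip down to a small fraction of their raw size: the metric
# charges far less for the book-keeping than the programmer pays
# to write and maintain it.
assert loc == 40
assert gz_bytes(boilerplate) < len(boilerplate.encode("utf-8")) / 5
```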
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
--- Sebastian Sylvan [EMAIL PROTECTED] wrote: -snip- It still tells you how much content you can see in a given amount of vertical space. And why would we care about that? :-) I think the point, however, is that while LOC is not perfect, gzip is worse. How do you know? Best case you'll end up concluding that the added complexity had no adverse effect on the results. Best case would be seeing that the results were corrected against bias in favour of long lines, and ranked programs in a way that looks right when we look at the program source code side-by-side. It's completely arbitrary and favours languages which require you to write tons of book-keeping (semantic noise), as it will compress down all that redundancy quite a bit (while the programmer still has to write it, and maintain it). So gzip is even less useful than LOC, as it actively *hides* the very thing you're trying to measure! You might as well remove it altogether. I don't think you've looked at any of the gz rankings, or compared the source code for any of the programs :-) Or, as has been suggested, count the number of words in the program. Again, not perfect (it's possible in some languages to write things which have no whitespace, but are still lots of tokens). Wouldn't that be completely arbitrary?
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
igouy2: --- Sebastian Sylvan [EMAIL PROTECTED] wrote: -snip- I follow the shootout changes fairly often, and the gzip change didn't significantly alter the rankings, though iirc, it did cause perl to drop a few places. Really, it's a fine heuristic, given its power/weight ratio. -- Don
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On Friday 02 November 2007 20:29, Isaac Gouy wrote: ...obviously LOC doesn't tell you anything about how much stuff is on each line, so it doesn't tell you about the amount of code that was written or the amount of code the developer can see whilst reading code. Code is almost ubiquitously visualized as a long vertical strip. The width is limited by your screen. Code is then read by scrolling vertically. This is why LOC is a relevant measure: because the area of the code is given by LOC * screen width and is largely unrelated to the subjective amount of stuff on each line. As you say, imperative languages like C are often formatted such that a lot of right-hand screen real estate is wasted. LOC penalizes such wastage. The same cannot be said for gzipped bytes, which is an entirely irrelevant metric... -- Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/products/?e
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
while LOC is not perfect, gzip is worse. the gzip change didn't significantly alter the rankings Currently the gzip ratio of C++ to Python is 2.0, which, at a glance, wouldn't sell me on a less code argument. Although the rank stayed the same, did the change reduce the magnitude of the victory? Thanks, Greg
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On Friday 02 November 2007 23:53, Isaac Gouy wrote: Best case you'll end up concluding that the added complexity had no adverse effect on the results. Best case would be seeing that the results were corrected against bias in favour of long-lines, and ranked programs in a way that looks-right when we look at the program source code side-by-side. Why would you want to subjectively correct for bias in favour of long lines? Or, as has been suggested, count the number of words in the program. Again, not perfect (it's possible in some languages to write things which has no whitespace, but is still lots of tokens). Wouldn't that be completely arbitrary? That is not an argument in favour of needlessly adding extra complexity and adopting a practically-irrelevant metric. Why not use the byte count of a PNG encoding of a photograph of the source code written out by hand in blue ballpoint pen? -- Dr Jon D Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/products/?e
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
--- Greg Fitzgerald [EMAIL PROTECTED] wrote: while LOC is not perfect, gzip is worse. the gzip change didn't significantly alter the rankings Currently the gzip ratio of C++ to Python is 2.0, which at a glance, wouldn't sell me on a less code argument. a) you're looking at an average, instead try http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=python&lang2=gpp b) we're not trying to sell you on a less code argument - it's whatever it is Although the rank stayed the same, did the change reduce the magnitude of the victory? c) that will have varied program to program, and do you care which way the magnitude of victory moved, or do you care that where it moved to makes more sense? For fun, 2 meteor-contest programs, ratios to the python-2 program:

        LOC   GZ    WC
ghc-3   0.98  1.40  1.51
gpp-4   3.76  4.14  4.22

Look at the python-2 and ghc-3 source and tell us if LOC gave a reasonable indication of relative program size - is ghc-3 really the smaller program? :-) http://shootout.alioth.debian.org/gp4/benchmark.php?test=meteor&lang=all&sort=gz
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 11/2/07, Sterling Clover [EMAIL PROTECTED] wrote: As I understand it, the question is what you want to measure for. gzip is actually pretty good, precisely because it removes boilerplate, reducing programs to something approximating their complexity. So a higher gzipped size means, at some level, a more complicated algorithm (in the case, maybe, of lower-level languages, because there's complexity that's not lifted to the compiler). LOC per language, as I understand it, has been somewhat called into question as a measure of productivity, but there's still a correlation between programmers and LOC across languages, even if it wasn't as strong as thought -- on the other hand, bugs per LOC seems to have been fairly strongly debunked as something constant across languages. If you want a measure of the language as a language, I guess LOC/gzipped is a good ratio for how much noise it introduces -- but if you want to measure just pure speed across similar algorithmic implementations, which, as I understand it, is what the shootout is all about, then gzipped actually tends to make some sense. --S
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Don Stewart [EMAIL PROTECTED] writes: goalieca: So in a few years time when GHC has matured we can expect performance to be on par with current Clean? So Clean is a good approximation to peak performance? If I remember the numbers, Clean is pretty close to C for most benchmarks, so I guess it is fair to say it is a good approximation to practical peak performance. Which proves that it is possible to write efficient low-level code in Clean. And remember usually Haskell is competing against 'high level' languages like python for adoption, where we're 5-500x faster anyway... Unfortunately, they replaced line counts with bytes of gzip'ed code -- while the former certainly has its problems, I simply cannot imagine what relevance the latter has (beyond hiding extreme amounts of repetitive boilerplate in certain languages). When we compete against Python and its ilk, we do so for programmer productivity first, and performance second. LOC was a nice measure, and encouraged terser and more idiomatic programs than the current crop of performance-tweaked low-level stuff. BTW, Python isn't so bad, performance wise. Much of what I do consists of reading some files, building up some hashes (associative arrays or finite maps, depending on where you come from :-), and generating some output. Python used to do pretty well here compared to Haskell, with rather efficient hashes and text parsing, although I suspect ByteString IO and other optimizations may have changed that now. -k -- If I haven't seen further, it is by standing in the footprints of giants
Re[2]: [Haskell-cafe] Re: Why can't Haskell be faster?
Hello Lennart, Thursday, November 1, 2007, 2:45:49 AM, you wrote: But yeah, a code generator at run time is a very cool idea, and one that has been studied, but not enough. vm-based languages (java, c#) have runtimes that compile bytecode to native code at runtime -- Best regards, Bulat mailto:[EMAIL PROTECTED]
RE: [Haskell-cafe] Re: Why can't Haskell be faster?
Yes, that's right. We'll be doing a lot more work on the code generator in the rest of this year and 2008. Here "we" includes Norman Ramsey and John Dias, as well as past interns Michael Adams and Ben Lippmeier, so we have real muscle! Simon | I don't think the register allocator is being rewritten so much as it is | being written: | | From talking to Ben, who rewrote the register allocator over the | summer, he said that the new graph-based register allocator is pretty | good. The thing that is holding it back is the CPS conversion bit, | which was also being rewritten over the summer, but didn't get | finished. I think these are both things which are likely to be done | for 6.10. | | Thanks | | Neil
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
I assume the reason they switched away from LOC is to prevent programmers artificially reducing their LOC count, e.g. by writing a = 5; b = 6; on a single line rather than putting a = 5; and b = 6; on separate lines in languages where newlines aren't syntactically significant. When gzipped, I guess that the ;\n string will be represented about as efficiently as just the single semicolon. On 01/11/2007, Ketil Malde [EMAIL PROTECTED] wrote: -snip-
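The guess about ;\n can be checked directly. An illustrative sketch (Python; the statement text is invented), comparing the same thirty statements merged onto one line against one statement per line:

```python
import gzip

def gz(src: str) -> int:
    # Bytes after gzip compression, as the shootout measures.
    return len(gzip.compress(src.encode("utf-8")))

# The same 30 statement pairs, merged onto one line vs. one per line:
one_line = "a = 5; b = 6; " * 30
many_lines = "a = 5;\nb = 6;\n" * 30
assert len(one_line) == len(many_lines)        # identical raw byte count
assert abs(gz(one_line) - gz(many_lines)) < 8  # gzip barely tells them apart

# A raw line count, by contrast, differs by a factor of 60:
assert len(many_lines.splitlines()) == 60 * len(one_line.splitlines())
```

So gzip removes the incentive to artificially merge lines, exactly as guessed above.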
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Bernie wrote: I discussed this with Rinus Plasmeijer (chief designer of Clean) a couple of years ago, and if I remember correctly, he said that the native code generator in Clean was very good, and a significant reason why Clean produces (relatively) fast executables. I think he said that they had an assembly programming guru on their team. (Apologies to Rinus if I am mis-remembering the conversation). That guru would be John van Groningen... If I understood correctly, and I think I did, John is now working on a Haskell front end for the Clean compiler---which is actually quite interesting in the light of the present discussion. Cheers, Stefan
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Neil wrote: The Clean and Haskell languages both reduce to pretty much the same Core language, with pretty much the same type system, once you get down to it - so I don't think the difference between the performance is a language thing, but it is a compiler thing. The uniqueness type stuff may give Clean a slight benefit, but I'm not sure how much they use that in their analyses. From what I know from the Nijmegen team, having the uniqueness information available and actually using it for code generation does allow for an impressive speed-up. The thing is: in principle, there is, I think, no reason why we can't do the same thing for Haskell. Of course, the Clean language exposes uniqueness types at its surface level, but that is in no way essential to the underlying analysis. Exposing uniqueness types is, in that sense, just an alternative to monadic encapsulation of side effects. So, a Haskell compiler could just implement a uniqueness analysis under the hood and use the results for generating code that does in-place updates that are guaranteed to maintain referential transparency. Interestingly---but now I'm shamelessly plugging a paper of Jurriaan Hage, Arie Middelkoop, and myself, presented at this year's ICFP [*]---such an analysis is very similar to sharing analysis, which may be used by compilers for lazy languages to avoid unnecessary thunk updates. Cheers, Stefan [*] Jurriaan Hage, Stefan Holdermans, and Arie Middelkoop. A generic usage analysis with subeffect qualifiers. In Ralf Hinze and Norman Ramsey, editors, Proceedings of the 12th ACM SIGPLAN International Conference on Functional Programming, ICFP 2007, Freiburg, Germany, October 1–3, pages 235–246. ACM Press, 2007. http://doi.acm.org/10.1145/1291151.1291189.
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 01/11/2007, Simon Peyton-Jones [EMAIL PROTECTED] wrote: -snip- That's very good to know. I wonder where I could read more about the current state of the art in Haskell compilation techniques and about the implementation of GHC in general? Is there a book on it, or maybe some group of papers that would aid me in understanding it? Cheers, Paulo Matos -- Paulo Jorge Matos - pocm at soton.ac.uk http://www.personal.soton.ac.uk/pocm PhD Student @ ECS University of Southampton, UK
RE: [Haskell-cafe] Re: Why can't Haskell be faster?
http://hackage.haskell.org/trac/ghc/wiki/Commentary | -Original Message- | From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Paulo J. Matos | Sent: 01 November 2007 13:42 | Subject: Re: [Haskell-cafe] Re: Why can't Haskell be faster? | -snip-
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Ketil Malde wrote: Python used to do pretty well here compared to Haskell, with rather efficient hashes and text parsing, although I suspect ByteString IO and other optimizations may have changed that now. It still does just fine. For typical "munge a file with regexps, lists, and maps" tasks, Python and Perl remain on par with comparably written Haskell. This is because the scripting-level code acts as a thin layer of glue around I/O, regexps, lists, and dicts, all of which are written in native code. The Haskell regexp libraries actually give us something of a leg down with respect to Python and Perl. The aggressive use of polymorphism in the return type of (=~) makes it hard to remember which of the possible return types gives me what information. Not only did I write a regexp tutorial to understand the API in the first place, I have to reread it every time I want to match a regexp. A suitable solution would be a return type of RegexpMatch a => Maybe a (to live alongside the existing types, but aiming to become the one that's easy to remember), with appropriate methods on a, but I don't have time to write up a patch. b
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 10/31/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: I didn't keep a copy, but if someone wants to retrieve it from the Google cache and put it on the new wiki (under the new licence, of course), please do so. Cheers, Andrew Bromage Done: http://www.haskell.org/haskellwiki/RuntimeCompilation . Please update it as needed. Justin
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Unfortunately, they replaced line counts with bytes of gzip'ed code -- while the former certainly has its problems, I simply cannot imagine what relevance the latter has (beyond hiding extreme amounts of repetitive boilerplate in certain languages). Sounds pretty fair to me. Programming is a job of compressing a solution set. Excessive boilerplate might mean that you have to type a lot, but doesn't necessarily mean that you have to think a lot. I think the previous line count was skewed in favor of very terse languages like Haskell, especially languages that let you put many ideas onto a single line. At the very least there should be a constant factor applied when comparing Haskell line counts to Python line counts, for example. (Python has very strict rules about putting multiple things on the same line.) Obviously no simple measure is going to satisfy everyone, but I think the gzip measure is more even-handed across a range of languages. It probably more closely approximates the amount of mental effort and hence time it requires to construct a program (i.e. I can whip out a lot of lines of code in Python very quickly, but it takes a lot more of them to do the same work as a single, dense line of Haskell code). When we compete against Python and its ilk, we do so for programmer productivity first, and performance second. LOC was a nice measure, and encouraged terser and more idiomatic programs than the current crop of performance-tweaked low-level stuff. The Haskell entries to the shootout are very obviously written for speed and not elegance. If you want to do better on the LoC measure, you can definitely do so (at the expense of speed). Tim Newsham http://www.thenewsh.com/~newsham/
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 01/11/2007, Tim Newsham [EMAIL PROTECTED] wrote: [LOC vs. gzip as a complexity metric -- quoted in full above] Personally I think syntactic noise is highly distracting, and semantic noise is even worse!
Gzip'd files don't show you that one language will require you to do 90% book-keeping for 10% algorithm while the other lets you get on with the job; it may make it look as if both languages are roughly equally good at letting the programmer focus on the important bits. I'm not sure what metric to use, but actively disguising noisy languages using compression sure doesn't seem anywhere close to ideal. Token count would be good, but then we'd need a parser for each language, which is quite a bit of work to do... -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862
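A minimal sketch of the simple counting metrics under discussion (lines, words and characters, in the spirit of `wc`); measuring the gzip'd size as well would need a compression library such as zlib, so only the plain counts are shown here:

```haskell
import System.Environment (getArgs)

-- Crude size metrics for a source file: line count, word count and
-- character count.  Any of these could serve as the "program size"
-- axis in a shootout-style comparison.
metrics :: String -> (Int, Int, Int)
metrics src = (length (lines src), length (words src), length src)

main :: IO ()
main = mapM_ (\f -> readFile f >>= print . metrics) =<< getArgs
```

Run as e.g. `runghc Metrics.hs Foo.hs` (the file name `Metrics.hs` is just for illustration).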
Re: Re[2]: [Haskell-cafe] Re: Why can't Haskell be faster?
Yes, of course. But they don't do partial evaluation. On 11/1/07, Bulat Ziganshin [EMAIL PROTECTED] wrote: Hello Lennart, Thursday, November 1, 2007, 2:45:49 AM, you wrote: But yeah, a code generator at run time is a very cool idea, and one that has been studied, but not enough. vm-based languages (java, c#) have runtimes that compile bytecode to native code at runtime -- Best regards, Bulat mailto:[EMAIL PROTECTED]
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Quoting Justin Bailey [EMAIL PROTECTED]: Done: http://www.haskell.org/haskellwiki/RuntimeCompilation . Please update it as needed. Thanks! Cheers, Andrew Bromage
[Haskell-cafe] Re: Why can't Haskell be faster?
I'm curious what experts think too. So far I just guess it is because Clean's type system gives the compiler better hints for optimizations: * it is easy to mark stuff strict (even in function signatures etc), so it is possible to save on unnecessary CAF creations * uniqueness types allow in-place modifications (instead of creating a copy of an object on the heap and modifying the copy), so you save GC time and also improve cache hit performance Peter.
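As a toy illustration of the "mark stuff strict" point in GHC terms (this is a hypothetical example, not Clean code): a bang pattern on an accumulator plays the role a strictness annotation in a Clean function signature would.

```haskell
{-# LANGUAGE BangPatterns #-}

-- A strict accumulator keeps the loop in constant space instead of
-- building up a chain of thunks.  GHC's strictness analyser often
-- infers this on its own; the bang just makes the intent explicit.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)
```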
[Haskell-cafe] Re: Why can't Haskell be faster?
Add to that better unbox/box annotations; this may make an even bigger difference than the strictness stuff, because it allows you to avoid a lot of indirect references to data. Anyway, if Haskell would do some kind of whole-program analysis and transformations it probably can mitigate all the problems to a certain degree. So the slowness of Haskell (compared to Clean) is a consequence of its type system. OK, I'll stop, I did not write Clean nor Haskell optimizers or stuff like that :-D Peter. Peter Hercek wrote: I'm curious what experts think too. So far I just guess it is because Clean's type system gives the compiler better hints for optimizations: * it is easy to mark stuff strict (even in function signatures etc), so it is possible to save on unnecessary CAF creations * uniqueness types allow in-place modifications (instead of creating a copy of an object on the heap and modifying the copy), so you save GC time and also improve cache hit performance Peter.
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 31/10/2007, Peter Hercek [EMAIL PROTECTED] wrote: Anyway, if Haskell would do some kind of whole-program analysis and transformations it probably can mitigate all the problems to a certain degree. I think JHC is supposed to do whole-program optimisations. Rumour has it that its Hello World examples are the fastest around - I have heard it has problems with larger code bases though. ;-) What's the current state of play on this? D.
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 31/10/2007, Peter Hercek [EMAIL PROTECTED] wrote: Add to that better unbox/box annotations; this may make an even bigger difference than the strictness stuff, because it allows you to avoid a lot of indirect references to data. Anyway, if Haskell would do some kind of whole-program analysis and transformations it probably can mitigate all the problems to a certain degree. So, I might assert that it is not a problem of the Haskell language itself, it is a problem with the compiler. Which means that with enough effort it would be possible for the compiler to generate compiled code with performance as good as Clean's. So the slowness of Haskell (compared to Clean) is a consequence of its type system. OK, I'll stop, I did not write Clean nor Haskell optimizers or stuff like that :-D Type system? Why is that? Shouldn't the type system in fact speed up the generated code, since it will know all types at compile time? Peter. [..] -- Paulo Jorge Matos - pocm at soton.ac.uk http://www.personal.soton.ac.uk/pocm PhD Student @ ECS University of Southampton, UK
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Paulo J. Matos wrote: So the slowness of Haskell (compared to Clean) is a consequence of its type system. OK, I'll stop, I did not write Clean nor Haskell optimizers or stuff like that :-D Type system? Why is that? Shouldn't the type system in fact speed up the generated code, since it will know all types at compile time? Yes, but apparently the Clean type system gives more information to the compiler than the Haskell type system does. The Haskell type system doesn't say that a certain value can be updated in-place or that a certain value should not be boxed (not counting the GHC extension for unboxed types). Reinier
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Paulo J. Matos wrote: Type system? Why is that? Shouldn't the type system in fact speed up the generated code, since it will know all types at compile time? The *existence* of a type system is helpful to the compiler. Peter was referring to the differences between Haskell and Clean. Specifically, Clean's uniqueness types allow for a certain kind of zero-copy mutation optimisation which is much harder for a Haskell compiler to automatically infer. It's not clear to me that it's actually worth it, but I think that's the point at issue. I can *imagine* algorithms in which copying is actually faster than mutation, if copying gives you better locality. Jules
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On Wed, 31 Oct 2007 14:17:13 + Jules Bean [EMAIL PROTECTED] wrote: Specifically, Clean's uniqueness types allow for a certain kind of zero-copy mutation optimisation which is much harder for a Haskell compiler to automatically infer. It's not clear to me that it's actually worth it, but I think that's the point at issue. I can *imagine* algorithms in which copying is actually faster than mutation, if copying gives you better locality. If you want in-place update in Haskell, you can use the ST monad, or IORefs. Yes, you have to refactor code, but anecdotally, uniqueness types aren't without problems either - you can make one small change and your code no longer satisfies the uniqueness condition. -- Robin
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Robin Green wrote: If you want in-place update in Haskell, you can use the ST monad, or IORefs. Yes, you have to refactor code, but anecdotally, uniqueness types aren't without problems either - you can make one small change and your code no longer satisfies the uniqueness condition. IORefs don't give you in-place update. They give you mutation, but new values are still allocated in new heap.

foo <- newIORef "hi"
writeIORef foo "bye"  -- "bye" is a new string, allocated in new heap; the only thing that got mutated was a pointer.

STArrays and certain IO arrays give you in-place update, though. Jules
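To illustrate Jules's distinction, here is a minimal sketch of genuine in-place update using an unboxed ST array (the standard `array` package API): the array is mutated inside `ST` and then frozen, with no intermediate copies allocated.

```haskell
import Control.Monad (forM_)
import Data.Array.ST (newArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, elems)

-- Fill an unboxed array in place.  `runSTUArray` freezes the mutable
-- STUArray into an immutable UArray without copying it.
squares :: Int -> UArray Int Int
squares n = runSTUArray $ do
  arr <- newArray (0, n - 1) 0
  forM_ [0 .. n - 1] $ \i -> writeArray arr i (i * i)
  return arr
```

Unlike the IORef example, each `writeArray` overwrites a slot in the existing heap object rather than allocating a fresh value and swinging a pointer.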
[Haskell-cafe] Re: Why can't Haskell be faster?
Peter Hercek wrote: * it is easy to mark stuff strict (even in function signatures etc), so it is possible to save on unnecessary CAF creations Also, the Clean compiler has a strictness analyzer. The compiler will analyze code and find many (but not all) cases where a function argument can be made strict without changing the behavior of the program.
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Hi I've been working on optimising Haskell for a little while (http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts on this. The Clean and Haskell languages both reduce to pretty much the same Core language, with pretty much the same type system, once you get down to it - so I don't think the difference between the performance is a language thing, but it is a compiler thing. The uniqueness type stuff may give Clean a slight benefit, but I'm not sure how much they use that in their analyses. Both Clean and GHC do strictness analysis - I don't know which one does better, but both do quite well. I think Clean has some generalised fusion framework, while GHC relies on rules and short-cut deforestation. GHC goes through C-- to C or ASM, while Clean has been generating native code for a lot longer. GHC is based on the STG machine, while Clean is based on the ABC machine - not sure which is better, but there are differences there. My guess is that the native code generator in Clean beats GHC, which wouldn't be too surprising as GHC is currently rewriting its CPS and Register Allocator to produce better native code. Thanks Neil On 10/31/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Peter Hercek wrote: * it is easy to mark stuff strict (even in function signatures etc), so it is possible to save on unnecessary CAF creations Also, the Clean compiler has a strictness analyzer. The compiler will analyze code and find many (but not all) cases where a function argument can be made strict without changing the behavior of the program.
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
ndmitchell: [..] My guess is that the native code generator in Clean beats GHC, which wouldn't be too surprising as GHC is currently rewriting its CPS and Register Allocator to produce better native code. Yes, this was my analysis too -- it's in the native code gen. Which is perhaps the main GHC bottleneck now. -- Don
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
So in a few years time when GHC has matured we can expect performance to be on par with current Clean? So Clean is a good approximation to peak performance? --ryan On 10/31/07, Don Stewart [EMAIL PROTECTED] wrote: [..]
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
goalieca: So in a few years time when GHC has matured we can expect performance to be on par with current Clean? So Clean is a good approximation to peak performance? The current Clean compiler, for micro benchmarks, seems to be rather good, yes. Any slowdown wrt. the same program in Clean could be considered a bug in GHC... And remember usually Haskell is competing against 'high level' languages like python for adoption, where we're 5-500x faster anyway... -- Don
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 31/10/2007, Don Stewart [EMAIL PROTECTED] wrote: [..] And remember usually Haskell is competing against 'high level' languages like python for adoption, where we're 5-500x faster anyway... Not so sure about that last thing. I'd love to use Haskell for performance, in other words use it because it makes it easier to write parallel and concurrent programs (NDP and STM mainly, though I wouldn't mind some language support for message passing, and perhaps Sing#-style static protocol specifications, with some high degree of inference). Anyway, in order for that to be reasonable I think it's important that even the sequential code (where actual data dependencies enforce evaluation sequence) runs very quickly, otherwise we'll lose out to some C-based language (written with 10x the effort) again when we start bumping into the wall of Amdahl's law... -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Hi So in a few years time when GHC has matured we can expect performance to be on par with current Clean? So Clean is a good approximation to peak performance? No. The performance of many real world programs could be twice as fast at least, I'm relatively sure. Clean is a good short term target, but in the long run Haskell should be aiming for equivalence with highly optimised C. Thanks Neil
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 10/31/07, Neil Mitchell [EMAIL PROTECTED] wrote: in the long run Haskell should be aiming for equivalence with highly optimised C. Really, that's not very ambitious. Haskell should be setting its sights higher. :-) When I first started reading about Haskell I misunderstood what currying was all about. I thought that if you provided one argument to a two argument function, say, then it'd do partial evaluation. Very soon I was sorely let down as I discovered that it simply made a closure that waits for the second argument to arrive so the reduction can be carried out. But every day, while coding at work (in C++), I see situations where true partial evaluation would give a big performance payoff, and yet there are so few languages that natively support it. Of course it would require part of the compiler to be present in the runtime. But by generating code in inner loops specialised to the data at hand it could easily outperform C code in a wide variety of real world code. I know there has been some research in this area, and some commercial C++ products for partial evaluation have appeared, so I'd love to see it in an easy to use Haskell form one day. Just dreaming, I know... -- Dan
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On Wed, 31 Oct 2007, Dan Piponi wrote: But every day, while coding at work (in C++), I see situations where true partial evaluation would give a big performance payoff, and yet there are so few languages that natively support it. Of course it would require part of the compiler to be present in the runtime. But by generating code in inner loops specialised to the data at hand it could easily outperform C code in a wide variety of real world code. I know there has been some research in this area, and some commercial C++ products for partial evaluation have appeared, so I'd love to see it in an easy to use Haskell form one day. I weakly remember an article on HaWiki about that ... If you write foo :: X -> Y -> Z foo x = let bar y = ... x ... y ... in bar would this give you true partial evaluation?
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
There are many ways to implement currying. And even with GHC you can get it to do some work given one argument if you write the function the right way. I've used this in some code where it was crucial. But yeah, a code generator at run time is a very cool idea, and one that has been studied, but not enough. -- Lennart On 10/31/07, Dan Piponi [EMAIL PROTECTED] wrote: [..]
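A sketch of one common way to "write the function the right way" (a toy example, not Lennart's actual code): do the expensive work before taking the second argument, so every partial application shares it.

```haskell
-- Because `table` is bound before the second argument is taken, it is
-- computed at most once per application of `lookupSquare` to `n`, and
-- shared by every call to the resulting function.
lookupSquare :: Int -> (Int -> Int)
lookupSquare n =
  let table = [ i * i | i <- [0 .. n] ]
  in \i -> table !! i

square10 :: Int -> Int
square10 = lookupSquare 10   -- the table is built once, then reused
```

Writing `lookupSquare n i = [ j * j | j <- [0 .. n] ] !! i` instead would rebuild the table on every call; the explicit `let ... in \i -> ...` shape is what lets GHC share the work across partial applications.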
[Haskell-cafe] Re: Why can't Haskell be faster?
I'd like to see Supero- and JHC-compiled examples in the language shootout.
[Haskell-cafe] Re: Why can't Haskell be faster?
The site claims it is quite up to date: about Haskell GHC The Glorious Glasgow Haskell Compilation System, version 6.6 Examples are compiled mostly in the middle of this year and at least -O was used. Each test has a log available. They are good at documenting what they do. Peter. Peter Verswyvelen wrote: Are these benchmarks still up-to-date? When I started learning FP, I had to choose between Haskell and Clean, so I made a couple of little programs in both. GHC 6.6.1 with -O was faster in most cases, sometimes a lot faster... I don't have the source code anymore, but it was based on the book The Haskell Road to Logic, Maths and Programming. However, the Clean compiler itself is really fast, which is nice; it reminds me of the feeling I had with Turbo Pascal under DOS :-) I find GHC rather slow in compilation. But that is another topic of course. Peter
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On Wed, 2007-10-31 at 23:44 +0100, Henning Thielemann wrote: On Wed, 31 Oct 2007, Dan Piponi wrote: [..] I weakly remember an article on HaWiki about that ... Probably RuntimeCompilation (or something like that and linked from the Knuth-Morris-Pratt implementation on HaWiki) written by Andrew Bromage. If you write foo :: X -> Y -> Z foo x = let bar y = ... x ... y ... in bar would this give you true partial evaluation? No. Partial evaluation (usually) implies a heck of a lot more than what you are trying to do.
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On Wed, Oct 31, 2007 at 03:37:12PM +, Neil Mitchell wrote: [..] My guess is that the native code generator in Clean beats GHC, which wouldn't be too surprising as GHC is currently rewriting its CPS and Register Allocator to produce better native code. I don't think the register allocator is being rewritten so much as it is being written:

[EMAIL PROTECTED]:/tmp$ cat X.hs
module X where
import Foreign
import Data.Int

memset :: Ptr Int32 -> Int32 -> Int -> IO ()
memset p v i = p `seq` v `seq` case i of
  0 -> return ()
  _ -> poke p v >> memset (p `plusPtr` sizeOf v) v (i - 1)

[EMAIL PROTECTED]:/tmp$ ghc -fbang-patterns -O2 -c -fforce-recomp -ddump-asm X.hs
...
X_zdwa_info:
  movl 8(%ebp),%eax
  testl %eax,%eax
  jne .LcH6
  movl $base_GHCziBase_Z0T_closure+1,%esi
  addl $12,%ebp
  jmp *(%ebp)
.LcH6:
  movl 4(%ebp),%ecx
  movl (%ebp),%edx
  movl %ecx,(%edx)
  movl (%ebp),%ecx
  addl $4,%ecx
  decl %eax
  movl %eax,8(%ebp)
  movl %ecx,(%ebp)
  jmp X_zdwa_info
...

Admittedly that's better than it used to be (I recall 13 memory references last time I tested it), but still...
the reason for your performance woes should be quite obvious in that snippet. Stefan
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
Hi I don't think the register allocator is being rewritten so much as it is being written: From talking to Ben, who rewrote the register allocator over the summer, he said that the new graph-based register allocator is pretty good. The thing that is holding it back is the CPS conversion bit, which was also being rewritten over the summer, but didn't get finished. I think these are both things which are likely to be done for 6.10. Thanks Neil
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On Thu, Nov 01, 2007 at 02:30:17AM +, Neil Mitchell wrote: Hi I don't think the register allocator is being rewritten so much as it is being written: From talking to Ben, who rewrote the register allocator over the summer, he said that the new graph-based register allocator is pretty good. The thing that is holding it back is the CPS conversion bit, which was also being rewritten over the summer, but didn't get finished. I think these are both things which are likely to be done for 6.10. Oh, that's good news. I look forward to a massive increase in the performance of GHC-compiled programs, most specifically GHC itself. Stefan
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
On 01/11/2007, at 2:37 AM, Neil Mitchell wrote: My guess is that the native code generator in Clean beats GHC, which wouldn't be too surprising as GHC is currently rewriting its CPS and Register Allocator to produce better native code. I discussed this with Rinus Plasmeijer (chief designer of Clean) a couple of years ago, and if I remember correctly, he said that the native code generator in Clean was very good, and a significant reason why Clean produces (relatively) fast executables. I think he said that they had an assembly programming guru on their team. (Apologies to Rinus if I am mis-remembering the conversation). At the time I was impressed by how fast Clean could recompile itself. Cheers, Bernie.
Re: [Haskell-cafe] Re: Why can't Haskell be faster?
G'day all. Quoting Derek Elkins [EMAIL PROTECTED]: Probably RuntimeCompilation (or something like that and linked from the Knuth-Morris-Pratt implementation on HaWiki) written by Andrew Bromage. I didn't keep a copy, but if someone wants to retrieve it from the Google cache and put it on the new wiki (under the new licence, of course), please do so. Cheers, Andrew Bromage
[Haskell-cafe] Re: Why is Haskell not homoiconic?
Homoiconic means that the primary representation of programs is also a data structure in a primitive type of the language itself The main reason is that Haskell is designed as a compiled language, so the source of the programme can safely disappear at runtime. So there's no need to have a representation of it beyond the source code. I'm not sure it's relevant. In syntactically scoped Lisps, the code is mostly manipulated at compile-time by macros, rather than at run-time. And indeed, Template Haskell makes Haskell pretty much homoiconic. Stefan
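A minimal Template Haskell sketch of that point: a quoted expression is an ordinary Haskell value that can be inspected and transformed like any other data structure.

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- [| ... |] quotes an expression; the result has type Q Exp, a plain
-- value describing the syntax tree of `1 + 2`.
quoted :: Q Exp
quoted = [| 1 + 2 |]

main :: IO ()
main = runQ quoted >>= print  -- prints the AST (an InfixE node)
```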
[Haskell-cafe] Re: Why is Haskell not homoiconic?
Henning Sato von Rosen [EMAIL PROTECTED] writes: Hi all! I am curious as to why Haskell not is homoiconic? It very nearly is. The icon for Haskell is a lower-case lambda, but the logo for these folk http://www.ualberta.ca/~cbidwell/cmb/lambda.htm is an upper-case lambda. Homoiconic means that the primary representation of programs is also a data structure in a primitive type of the language itself. Oh, dear, that renders my remark above irrelevant ;-) The main reason is that Haskell is designed as a compiled language, so the source of the programme can safely disappear at runtime. So there's no need to have a representation of it beyond the source code. -- Jón Fairbairn [EMAIL PROTECTED]
[Haskell-cafe] Re: Why does Haskell have the if-then-else syntax?
Mike Gunter wrote: I had hoped the History of Haskell paper would answer a question I've pondered for some time: why does Haskell have the if-then-else syntax? The paper doesn't address this. What's the story? For what it's worth, I have been asking myself the same question several times. If/then/else syntax could be replaced by a regular (lazy) function without any noticeable loss. Almost every time I use if/then/else I end up changing it to a case expression on the underlying data (which is almost never Bool); the only exception being simple one liners, and for those a function would be even more concise. IMHO, the next standardized version of Haskell, however named, should abandon the special if/then/else syntax so we'll have at least /one/ item where the language becomes smaller and simpler. Remember: Perfection is reached not when there is nothing more to add, but rather when there is nothing more to take away. On another note, I remember reading a paper proposing to generalize if/then/else to arbitrary (so-called) dist-fix operators, using something like partial backquoting, as in `if condition `then` true_branch `else` false_branch fi` Can't remember the exact title of the paper, nor the details, but it was something to do with adding macros to Haskell. Cheers, Ben
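Benjamin's claim is easy to demonstrate. Here is a minimal sketch of if/then/else as an ordinary lazy function (the name `if'` is only illustrative, not a standard library export):

```haskell
-- A regular function doing the job of the special syntax.
-- Pattern matching on the Bool selects a branch; since Haskell
-- is non-strict, the other argument is simply never evaluated.
if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ e = e

-- A simple one-liner, arguably as concise as the sugared form:
describe :: Int -> String
describe n = if' (even n) "even" "odd"
```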
Re: [Haskell-cafe] Re: Why does Haskell have the if-then-else syntax?
G'day all. Quoting Benjamin Franksen [EMAIL PROTECTED]: For what it's worth, I have been asking myself the same question several times. If/then/else syntax could be replaced by a regular (lazy) function without any noticeable loss. I believe that if-then-else cannot be replaced by a regular function for the same reason that regular function application and ($) are not identical. The loss may not be noticeable, but it's still a loss. It could be replaced by a case-switch-on-Bool, though. IMHO, the next standardized version of Haskell, however named, should abandon the special if/then/else syntax so we'll have at least /one/ item where the language becomes smaller and simpler. The de facto Haskell philosophy, if you read the history paper, is to have a small core language with a lot of syntactic sugar. The syntactic sugar is specified by translation to the core language. The small core ensures that Haskell remains simple. If you discount changes in the type system, the Haskell core language is as simple now as it was in 1989. Remember: Perfection is reached not when there is nothing more to add, but rather when there is nothing more to take away. Perfection is asymptotically approached when arbitrary restrictions are removed and special cases are dumped in favour of general, theoretically sound, principles. Perfection will never be reached in a practical programming language, but it may be asymptotically approached. Cheers, Andrew Bromage
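For what it's worth, the case-switch-on-Bool Andrew describes is exactly what later landed in the standard libraries as `Data.Bool.bool` (in base 4.7, long after this thread); note that it takes the else-branch first:

```haskell
import Data.Bool (bool)

-- bool :: a -> a -> Bool -> a
-- bool elseBranch thenBranch condition
cap :: Int -> Int
cap x = bool x 100 (x > 100)

-- The argument order makes it compose nicely point-free:
labels :: [Bool] -> [String]
labels = map (bool "no" "yes")
```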
Re: [Haskell-cafe] Re: Why Not Haskell?
Reilly Hayes [EMAIL PROTECTED] writes: On Aug 8, 2006, at 1:42 AM, Immanuel Litzroth wrote: Reilly Hayes [EMAIL PROTECTED] writes: I don't understand your argument. How exactly does the GPL get in the way of selling software as an instantiation of business expertise? Are you saying that you have the business expertise but customers still prefer not to buy your software? Doesn't that just mean that your expertise isn't worth much (economic evaluation :-). Or that your idea that they were buying expertise was not correct, they were just buying the software after all, and now they have an alternative? I failed to communicate my case clearly. The software *is* what is being sold. The *reason* it is valuable is the business expertise required to build it. There are markets with very small populations of people who both understand the business thoroughly and can implement solutions. It makes software valuable and makes licensing the most effective way to monetize that value. I am not arguing that licensing would not be a very effective way to monetize value. Yes I know the business model. Sell them some overpriced software charge them through the nose for support, features, training, installation, updates Your resentment against the GPL stems from the fact that it makes squeezing the last buck out of your clients somewhat harder (in some markets). It probably annoys you that you are not dealing with a competitor who is making shitloads of money, making some price fixing or secret agreements not feasable. Your problem is that just as your business practice is not illegal, neither is the GPL. This paragraph is way out of line. You have taken a discussion of the merits of using GPL software and turned it into a personal attack. Attack the argument, not the arguer. It would be both polite and reasonable to tone down the hostility if you actually want a discussion. Yeah, it might have been harsh and I apologize. 
But I just describe what I have seen in some of the companies I worked for. I don't have a problem with the GPL. In my professional life, I am careful to avoid GPL software in those cases where the GPL would interfere with the firm's commercial interests. I certainly don't resent the GPL or those who choose to release software under the GPL. In fact, I can imagine wanting to release some kinds of software under the GPL. The point I was making was that the GPL *does* get in the way of *some* optimal mechanisms of making money. Which is *fine*. That is one of the *intents* of the GPL. The argument that I am trying to counter is the one that says open source is *always* better for everybody. I don't think the *intent* of the GPL is to get in the way of some optimal ways of making money. Can you tell me which part of the GPL makes you think that? It might have that side-effect though. Sometimes, the best thing for the owner of the intellectual property is to keep it closed. There *are* markets where monetization of IP is a zero sum game, or worse (if the IP is public, nobody makes any money). I wonder who you see as the participants in this game? A worse than zero sum game might be interesting if you are one of the people who score positive and some of the other people have to pay for it. Gambling is a fine example. I'm not making (or getting involved in) the moral argument about free or open software. I will point out that the current good health of Haskell owes a great deal to Microsoft through the computer scientists they employ. I'm sure Haskell has benefitted from the largesse of other companies as well. That is definitely wrong. Haskell would be in even greater shape if some people who shall remain unnamed had not gone over to Microsoft. I foresee an interesting discussion here. I don't see how you can say Haskell would be better OR worse off if people hadn't gone to work for Microsoft. It's an entirely hypothetical case and it's just not knowable. 
My point is much simpler. Haskell and GHC do benefit from the efforts of people being paid by Microsoft. Microsoft is planning to hire a full-time contractor to work on GHC. It seems irony gets lost so easily in these conversations. You have no way of knowing what the state of Haskell would have been had certain key contributors to GHC and Haskell not taken jobs at Microsoft. Therefore your statement is meaningless and only good for producing approving nods among people who already agree with what you say. The snarky comment about people who shall remain unnamed is rude. I did not mean to be rude, and would like to apologize if anyone felt personally attacked by this. Immanuel -- *** I can, I can't. Tubbs Tattsyrup -- Immanuel
Re: [Haskell-cafe] Re: Why Not Haskell?
Reilly Hayes [EMAIL PROTECTED] writes: On Aug 7, 2006, at 10:00 AM, Stefan Monnier wrote: In any case, making a living by selling a program (as opposed to services around that program) is a difficult business. Making a living writing and selling programs for use by a wide audience is one thing. But there is a lot of money to be made by developers who really understand a complex niche market (assuming the niche is actually populated by customers who need and can pay for the product). And the GPL absolutely gets in the way of that. Because what you're really selling in that kind of market is software as an instantiation of business expertise. I don't understand your argument. How exactly does the GPL get in the way of selling software as an instantiation of business expertise? Are you saying that you have the business expertise but customers still prefer not to buy your software? Doesn't that just mean that your expertise isn't worth much (economic evaluation :-). Or that your idea that they were buying expertise was not correct, they were just buying the software after all, and now they have an alternative? Maybe you should thank the FSF for making you doubt: you should really think very hard about how you're going to make a living off of selling a program, even if that program hasn't been anywhere near any GPL'd code. In all likelihood it'll be much easier to earn your money by selling services around your program than just the program itself. Selling services is much easier if you can tie the services to IP that you own exclusively. It can also double your firm's daily rate on related services. And the economics of selling product (the program) can be MUCH better, assuming people want to use the program. If they don't, then you don't have a service business either. Yes I know the business model. 
Sell them some overpriced software charge them through the nose for support, features, training, installation, updates Your resentment against the GPL stems from the fact that it makes squeezing the last buck out of your clients somewhat harder (in some markets). It probably annoys you that you are not dealing with a competitor who is making shitloads of money, making some price fixing or secret agreements not feasible. Your problem is that just as your business practice is not illegal, neither is the GPL. I'm not making (or getting involved in) the moral argument about free or open software. I will point out that the current good health of Haskell owes a great deal to Microsoft through the computer scientists they employ. I'm sure Haskell has benefitted from the largesse of other companies as well. That is definitely wrong. Haskell would be in even greater shape if some people who shall remain unnamed had not gone over to Microsoft. I foresee an interesting discussion here. Immanuel -- *** I can, I can't. Tubbs Tattsyrup -- Immanuel Litzroth Software Development Engineer Enfocus Software Antwerpsesteenweg 41-45 9000 Gent Belgium Voice: +32 9 269 23 90 Fax : +32 9 269 16 91 Email: [EMAIL PROTECTED] web : www.enfocus.be ***
Re: [Haskell-cafe] Re: Why Not Haskell?
On Aug 8, 2006, at 1:42 AM, Immanuel Litzroth wrote: "Reilly Hayes" [EMAIL PROTECTED] writes: I don't understand your argument. How exactly does the GPL get in the way of selling software as an instantiation of business expertise? Are you saying that you have the business expertise but customers still prefer not to buy your software? Doesn't that just mean that your expertise isn't worth much (economic evaluation :-). Or that your idea that they were buying expertise was not correct, they were just buying the software after all, and now they have an alternative? I failed to communicate my case clearly. The software *is* what is being sold. The *reason* it is valuable is the business expertise required to build it. There are markets with very small populations of people who both understand the business thoroughly and can implement solutions. It makes software valuable and makes licensing the most effective way to monetize that value. Yes I know the business model. Sell them some overpriced software charge them through the nose for support, features, training, installation, updates Your resentment against the GPL stems from the fact that it makes squeezing the last buck out of your clients somewhat harder (in some markets). It probably annoys you that you are not dealing with a competitor who is making shitloads of money, making some price fixing or secret agreements not feasible. Your problem is that just as your business practice is not illegal, neither is the GPL. This paragraph is way out of line. You have taken a discussion of the merits of using GPL software and turned it into a personal attack. Attack the argument, not the arguer. It would be both polite and reasonable to tone down the hostility if you actually want a discussion. Certainly, some firms use restrictive software licensing to maximize short term revenue from their clients in the way you describe. But I was referring to the marketing value of having the IP. 
It's easier to sell services when you have some unique core IP, even to clients that aren't going to buy your product. It gives your credibility a boost. I don't have a problem with the GPL. In my professional life, I am careful to avoid GPL software in those cases where the GPL would interfere with the firm's commercial interests. I certainly don't resent the GPL or those who choose to release software under the GPL. In fact, I can imagine wanting to release some kinds of software under the GPL. The point I was making was that the GPL *does* get in the way of *some* optimal mechanisms of making money. Which is *fine*. That is one of the *intents* of the GPL. The argument that I am trying to counter is the one that says open source is *always* better for everybody. Sometimes, the best thing for the owner of the intellectual property is to keep it closed. There *are* markets where monetization of IP is a zero sum game, or worse (if the IP is public, nobody makes any money). I'm not making (or getting involved in) the moral argument about free or open software. I will point out that the current good health of Haskell owes a great deal to Microsoft through the computer scientists they employ. I'm sure Haskell has benefitted from the largesse of other companies as well. That is definitely wrong. Haskell would be in even greater shape if some people who shall remain unnamed had not gone over to Microsoft. I foresee an interesting discussion here. I don't see how you can say Haskell would be better OR worse off if people hadn't gone to work for Microsoft. It's an entirely hypothetical case and it's just not knowable. My point is much simpler. Haskell and GHC do benefit from the efforts of people being paid by Microsoft. Microsoft is planning to hire a full-time contractor to work on GHC. The snarky comment about "people who shall remain unnamed" is rude. -R Hayes
[Haskell-cafe] Re: Why Not Haskell?
Well I understand the free as in free speech not free beer motto, but suppose person A is talented at writing software but prefers a peaceful existence and lacks the contacts/refs/desire/energy etc to be a consultant or contractor, and has had the bad experience of being forced to work extremely long hours with low pay while in an employed position, and person B is outgoing, ebullient, and talented at marketing and advertising. Now person A spends some years quietly writing some code, which uses a GPL library and is therefore GPL'd, and sells it, as is his/her right under the GPL, to person B. If person A really worked for years using a GPL'd library and hoping to make money selling the resulting program (rather than services around that program), he's a complete and total idiot. In any case, making a living by selling a program (as opposed to services around that program) is a difficult business. Except when it's a program written on-demand for a customer who pays you directly to write it (in which case the GPL probably won't get in the way, BTW). I can't entirely dismiss GNU/FSF/GPL but it poses a fundamental conflict with the only way I can see of earning a living so it's like a continuous background problem which drains some of my energy and enthusiasm hence the length of my rambling post where I made another attempt to understand my relation to it. Maybe you should thank the FSF for making you doubt: you should really think very hard about how you're going to make a living off of selling a program, even if that program hasn't been anywhere near any GPL'd code. In all likelihood it'll be much easier to earn your money by selling services around your program than just the program itself. Stefan
[Haskell-cafe] Re: Why Not Haskell? (sidenote on licensing)
Sorry, I didn't mean to offend anybody, or be misleading. I like GPL, but I also like the disease metaphor (although it's not as much being sneezed at as having sex with somebody :-). Then you should think twice before using such metaphors: you end up propagating hate for something which you like. And it's really not as easy to control as you suggest: If you ever take in a single patch under the GPL, Any patch or outside piece of code you choose to include in your code should be checked to see if its licence allows you to use it like you intend. That's true for any license, not just for the GPL. And don't forget: the default license is no licence at all (i.e. basically just what the copyright's fair use says, which seems to be asymptotically moving towards the empty set as time goes by). or even implement a new feature in an obvious way that has been implemented by somebody else under the GPL, you are in trouble. Doesn't sound credible. You're free to write and sell a program whose source code is exactly the same as Emacs's (or PowerPoint for that matter) as long as you can show it was pure accident (or if you like a more classic example: http://en.wikipedia.org/wiki/Pierre_Menard_(fictional_character)) AFAIK the problem you talk about only comes with patents and is unrelated to copyright/licenses/GPL. Stefan
[Haskell-cafe] Re: Why Not Haskell?
Stefan Monnier [EMAIL PROTECTED] writes: I can't entirely dismiss GNU/FSF/GPL but it poses a fundamental conflict with the only way I can see of earning a living so it's like a continuous background problem which drains some of my energy and enthusiasm hence the length of my rambling post where I made another attempt to understand my relation to it. Maybe you should thank the FSF for making you doubt: you should really think very hard about how you're going to make a living off of selling a program, even if that program hasn't been anywhere near any GPL'd code. In all likelihood it'll be much easier to earn your money by selling services around your program than just the program itself. To add to that from the point of view of a potential user: if there is some programme that I'm going to rely on and its source is not free, I'll look elsewhere rather than rely on a single vendor that might disappear without a trace and leave me with no support. Conversely, if it has free source, but doesn't quite do what I'm relying on it to do, I'll happily pay someone to sort it out for me (assuming that I can't/don't want to/am too busy to do it myself and that I have any money). I know of several good ideas that started out as attempts at commercial projects but weren't taken up. The best that happened to them is that someone recoded the idea (or it was re-released) as free software. If that didn't happen, they disappeared without trace. Remember, keeping the code secret is no protection against someone rewriting the whole thing from scratch. If it's a big enough idea, you can be sure that some large commercial concern (and conceivably teams of amateurs) will do that unless you've patented something crucial... and keeping patents alive is an expensive business -- especially if there's a large concern on your case (we want to use your patented idea. Oh, it looks like your code uses one of our patented ideas; you'll be hearing from our lawyers). 
-- Jón Fairbairn [EMAIL PROTECTED] http://www.chaos.org.uk/~jf/Stuff-I-dont-want.html (updated 2006-07-14)
Re: [Haskell-cafe] Re: Why Not Haskell? (sidenote on licensing)
Stefan Monnier [EMAIL PROTECTED] writes: (snip) Doesn't sound credible. You're free to write and sell a program whose source code is exactly the same as Emacs's (or PowerPoint for that matter) as long as you can show it was pure accident (snip) It's kind of hard to be sure that you'll be able to show that, though, especially if the other code was available to you. -- Mark
Re: [Haskell-cafe] Re: Why Not Haskell?
Jón Fairbairn wrote: Stefan Monnier [EMAIL PROTECTED] writes: I can't entirely dismiss GNU/FSF/GPL but it poses a fundamental conflict with the only way I can see of earning a living so it's like a continuous background problem which drains some of my energy and enthusiasm hence the length of my rambling post where I made another attempt to understand my relation to it. Maybe you should thank the FSF for making you doubt: you should really think very hard about how you're going to make a living off of selling a program, even if that program hasn't been anywhere near any GPL'd code. In all likelihood it'll be much easier to earn your money by selling services around your program than just the program itself. To add to that from the point of view of a potential user: if there some programme that I'm going to rely on and its source is not free, I'll look elsewhere rather than rely on a single vendor that might disappear without a trace and leave me with no support. Conversely, if it has free source, but doesn't quite do what I'm relying on it to do, I'll happily pay someone to sort it out for me (assuming that I can't/don't want to/am to busy to do it myself and that I have any money). I know of several good ideas that started out as attempts at commercial projects but weren't taken up. The best that happened to them is that someone recoded the idea (or it was re-released) as free software. If that didn't happen, they disappeared without trace. Remember, keeping the code secret is no protection against someone rewriting the whole thing from scratch. If it's a big enough idea, you can be sure that some large commercial concern (and conceivably teams of amateurs) will do that unless you've patented something crucial... and keeping patents alive is an expensive business -- especially if there's a large concern on your case (we want to use your patented idea. Oh, it looks like your code uses one of our patented ideas; you'll be hearing from our lawyers). 
Thanks Jón and Stefan for these points. I'm coming round to the idea that possibly a combination of BSD (for libs) and a metamorphosing licence for the program (from proprietary up to a certain date then GPL thereafter) would solve these problems by removing incentives for anyone else to try and reverse engineer code before I'd had time to get an established user base, while keeping users happy (6 months is not that long to wait to get full control), and preventing anyone else getting a similar advantage after the 6 months had elapsed (if they used any of the non-BSD parts of the app (now available to them under GPL) they'd have to release their version as GPL). After the 6 months had elapsed, other companies could develop the code further, but they wouldn't be able to impose a similar metamorphosing license because the code they used (apart from the BSD components of course) would be covered by GPL. However *I* would still have the right to modify my code and repeat the metamorphic process because I wouldn't be bound by the metamorphic GPL license I sold to others (please correct me if I've got this wrong), so people could choose to pay a modest sum to me for the improved version, (which I'd have had a head start of the last 6 months to develop) or wait 6 months to get it from some other company, or spend several months hacking themselves starting from the original version... It gets even better because as long as I make sure that I only use BSD libs + my own code, I could always choose to release future versions with a proprietary license therefore the amortized consequence of the previous metamorphic GPL releases would be risk-free (those versions now being so far behind that they would be irrelevant) yet any other companies which had made improvements (as long as they were based on a version they received + all their own code (or BSD code)) could be a useful source of ideas (to reimplement) or collaboration. 
Anyway no doubt this is all getting a bit off topic but it's interesting that the different concepts provided by BSD and GPL can suggest possible models like the above. Regards, Brian. -- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com
Re: [Haskell-cafe] Re: Why Not Haskell?
On Aug 7, 2006, at 10:00 AM, Stefan Monnier wrote: In any case, making a living by selling a program (as opposed to services around that program) is a difficult business. Making a living writing and selling programs for use by a wide audience is one thing. But there is a lot of money to be made by developers who really understand a complex niche market (assuming the niche is actually populated by customers who need and can pay for the product). And the GPL absolutely gets in the way of that. Because what you're really selling in that kind of market is software as an instantiation of business expertise. Maybe you should thank the FSF for making you doubt: you should really think very hard about how you're going to make a living off of selling a program, even if that program hasn't been anywhere near any GPL'd code. In all likelihood it'll be much easier to earn your money by selling services around your program than just the program itself. Selling services is much easier if you can tie the services to IP that you own exclusively. It can also double your firm's daily rate on related services. And the economics of selling product (the program) can be MUCH better, assuming people want to use the program. If they don't, then you don't have a service business either. I'm not making (or getting involved in) the moral argument about free or open software. I will point out that the current good health of Haskell owes a great deal to Microsoft through the computer scientists they employ. I'm sure Haskell has benefitted from the largesse of other companies as well. Reilly
Re: [Haskell-cafe] Re: Why Not Haskell?
Brian Hulley wrote: Jón Fairbairn wrote: Stefan Monnier [EMAIL PROTECTED] writes: I can't entirely dismiss GNU/FSF/GPL... Maybe you should thank the FSF for making you doubt: I know of several good ideas that started out as attempts at commercial projects but weren't taken up. [...snip] Thanks Jón and Stefan for these points. I'm coming round to the idea that possibly a combination of BSD (for libs) and a metamorphosing licence for the program (from proprietary up to a certain date then GPL thereafter) would solve... Actually I've reconsidered that model and can't recommend it any more so please ignore it or treat it with some caution. (Making the end product open doesn't help (in terms of me making money) if most of the target user base isn't at all interested in hacking, and the cyclic metamorphic model doesn't admit the same advantages of collaboration that a purely open source model would and might just degenerate into a heavily forked mess...) Apologies for posting before I'd considered these implications - I'm out of this thread now (everyone will be very pleased to hear!) Regards, Brian. -- Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com
[Haskell-cafe] Re: Why does Haskell have the if-then-else syntax?
Confusingly, if c then t else f also works, although no-one really knows why. Actually, it doesn't work inside a `do' layout, Stefan
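Stefan is referring to the Haskell 98 layout rule: inside a do-block, a `then` or `else` aligned with the opening `if` gets an implicit semicolon inserted in front of it, which is a parse error. (Haskell 2010's DoAndIfThenElse change later allowed optional semicolons there.) The usual workaround is to indent the keywords further; a small sketch:

```haskell
-- Works in Haskell 98 and later: `then`/`else` are indented
-- past the column of `if`, so layout inserts no semicolons.
answer :: Bool -> IO String
answer c = do
  if c
    then return "yes"
    else return "no"

-- With `then` aligned under `if`, Haskell 98 rejected this:
--   do if c
--      then return "yes"   -- parse error: layout inserts ';' here
--      else return "no"
```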
[Haskell-cafe] Re: Why does Haskell have the if-then-else syntax?
On 7/26/06, Donn Cave [EMAIL PROTECTED] wrote: That looks to me like a different way to spell if then else, but maybe that's the answer to the question - conceptually, for every then there really is an else, however you spell it, and only in a procedural language does it make any sense to leave it implicit. The exception that proves the rule is else return () -, e.g., ... Strictly speaking that generalizes to any functional context where a generic value can be assigned to the else clause, but there don't tend to be that many other such contexts. Does that answer the question? I believe his question was why if-then-else is syntax, rather than the function he gave. Since Haskell is non-strict, it doesn't need to be implemented as syntax (unlike, say, Scheme, where it needs to be a special form/macro to avoid executing both branches). I imagine the answer is that having the syntax for it looks nicer/is clearer. if a b c could be more cryptic than if a then b else c for some values of a, b and c. -- Dan
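Dan's point about non-strictness can be checked directly: a user-defined conditional (here called `ifFn`, an illustrative name) needs no special-form machinery, because the branch not taken is simply never forced:

```haskell
-- In a strict language this function would evaluate both branch
-- arguments before choosing; under Haskell's non-strict semantics
-- only the selected branch is ever evaluated.
ifFn :: Bool -> a -> a -> a
ifFn c t e = case c of
  True  -> t
  False -> e

-- The untaken branch can even be a diverging expression:
demo :: Int
demo = ifFn (1 < 2) 42 (error "this branch is never evaluated")
```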
Re: [Haskell-cafe] Re: Why does Haskell have the if-then-else syntax?
On 7/27/06, Dan Doel [EMAIL PROTECTED] wrote: On 7/26/06, Donn Cave [EMAIL PROTECTED] wrote: That looks to me like a different way to spell if then else, but maybe that's the answer to the question - conceptually, for every then there really is an else, however you spell it, and only in a procedural language does it make any sense to leave it implicit. The exception that proves the rule is else return () -, e.g., ... Strictly speaking that generalizes to any functional context where a generic value can be assigned to the else clause, but there don't tend to be that many other such contexts. Does that answer the question? I believe his question was why if-then-else is syntax, rather than the function he gave. Since haskell is non-strict, it doesn't need to be implemented as syntax (unlike, say, scheme, where it needs to be a special form/macro to avoid executing both branches). I imagine the answer is that having the syntax for it looks nicer/is clearer. if a b c could be more cryptic than if a then b else c for some values of a, b and c. Also, you get fewer parentheses: myAbs x = if x < 0 then negate x else x /S -- Sebastian Sylvan +46(0)736-818655 UIN: 44640862