On Nov 3, 2007 5:00 AM, Ryan Dickie [EMAIL PROTECTED] wrote:
Lossless file compression, AKA entropy coding, attempts to maximize the
amount of information per bit (or byte), bringing it as close to the
entropy as possible. Basically, gzip is measuring (approximating) the
amount of information
Sebastian Sylvan [EMAIL PROTECTED] writes:
[LOC vs gz as a program complexity metric]
Obviously no simple measure is going to satisfy everyone, but I think the
gzip measure is more even-handed across a range of languages.
It probably more closely approximates the amount of mental effort [..]
On 11/2/07, Isaac Gouy [EMAIL PROTECTED] wrote:
Ketil Malde wrote:
[LOC vs gz as a program complexity metric]
Do either of those make sense as a program /complexity/ metric?
You're right! We should be using Kolmogorov complexity instead!
I'll go write a program to calculate it for the
On Friday 02 November 2007 19:03, Isaac Gouy wrote:
It's slightly interesting that, while we're happily opining about LOCs
and gz, no one has even tried to show that switching from LOCs to gz
made a big difference in those program bulk rankings, or even
provided a specific example that they
--- Jon Harrop [EMAIL PROTECTED] wrote:
On Friday 02 November 2007 19:03, Isaac Gouy wrote:
It's slightly interesting that, while we're happily opining about LOCs
and gz, no one has even tried to show that switching from LOCs to gz
made a big difference in those program bulk rankings,
On 11/2/07, Isaac Gouy [EMAIL PROTECTED] wrote:
How strange that you've snipped out the source code shape comment that
would undermine what you say - obviously LOC doesn't tell you anything
about how much stuff is on each line, so it doesn't tell you about the
amount of code that was written
--- Sebastian Sylvan [EMAIL PROTECTED] wrote:
-snip-
It still tells you how much content you can see on a given amount of
vertical space.
And why would we care about that? :-)
I think the point, however, is that while LOC is not perfect, gzip is
worse.
How do you know?
Best case
On Friday 02 November 2007 20:29, Isaac Gouy wrote:
...obviously LOC doesn't tell you anything
about how much stuff is on each line, so it doesn't tell you about the
amount of code that was written or the amount of code the developer can
see whilst reading code.
Code is almost ubiquitously
while LOC is not perfect, gzip is worse.
the gzip change didn't significantly alter the rankings
Currently the gzip ratio of C++ to Python is 2.0, which, at a glance,
wouldn't sell me on a "less code" argument. Although the rank stayed the
same, did the change reduce the magnitude of the victory?
On Friday 02 November 2007 23:53, Isaac Gouy wrote:
Best case you'll end up concluding that the added complexity had
no adverse effect on the results.
Best case would be seeing that the results were corrected against bias
in favour of long-lines, and ranked programs in a way that
--- Greg Fitzgerald [EMAIL PROTECTED] wrote:
while LOC is not perfect, gzip is worse.
the gzip change didn't significantly alter the rankings
Currently the gzip ratio of C++ to Python is 2.0, which, at a glance,
wouldn't sell me on a "less code" argument.
a) you're looking at an average,
On 11/2/07, Sterling Clover [EMAIL PROTECTED] wrote:
As I understand it, the question is what you want to measure for.
gzip is actually pretty good at, precisely because it removes
boilerplate, reducing programs to something approximating their
complexity. So a higher gzipped size means, at
Don Stewart [EMAIL PROTECTED] writes:
goalieca:
So in a few years time when GHC has matured we can expect performance to
be on par with current Clean? So Clean is a good approximation to peak
performance?
If I remember the numbers, Clean is pretty close to C for most
benchmarks, so
Yes, that's right. We'll be doing a lot more work on the code generator in the
rest of this year and 2008. Here "we" includes Norman Ramsey and John Dias, as
well as past interns Michael Adams and Ben Lippmeier, so we have real muscle!
Simon
| I don't think the register allocator is being
I assume the reason they switched away from LOC is to prevent
programmers artificially reducing their LOC count, e.g. by using
a = 5; b = 6;
rather than
a = 5;
b = 6;
in languages where newlines aren't syntactically significant. When
gzipped, I guess that the ";\n" string will be represented about
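The layout-gaming argument above is easy to check directly. A minimal sketch (not from the thread; the helper names `loc` and `gz_size` are invented) comparing the two snippets under both metrics:

```python
import gzip

# Two semantically identical snippets whose only difference is line layout.
one_line = b"a = 5; b = 6;\n"
two_lines = b"a = 5;\nb = 6;\n"

def loc(src):
    # lines of code, counted naively by newlines
    return src.count(b"\n")

def gz_size(src):
    # bytes of gzip-compressed source
    return len(gzip.compress(src))

# LOC rewards cramming statements onto one line; gzip barely notices,
# since both snippets carry the same information.
print(loc(one_line), loc(two_lines))        # 1 vs 2
print(gz_size(one_line), gz_size(two_lines))
```

For inputs this tiny the gzip header dominates, but the point survives at scale: reflowing code across lines changes LOC freely while leaving the compressed size nearly untouched.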
Bernie wrote:
I discussed this with Rinus Plasmeijer (chief designer of Clean) a
couple of years ago, and if I remember correctly, he said that the
native code generator in Clean was very good, and a significant
reason why Clean produces (relatively) fast executables. I think he
said
Neil wrote:
The Clean and Haskell languages both reduce to pretty much
the same Core language, with pretty much the same type system, once
you get down to it - so I don't think the difference between the
performance is a language thing, but it is a compiler thing. The
uniqueness type stuff may
On 01/11/2007, Simon Peyton-Jones [EMAIL PROTECTED] wrote:
Yes, that's right. We'll be doing a lot more work on the code generator in
the rest of this year and 2008. Here "we" includes Norman Ramsey and John
Dias, as well as past interns Michael Adams and Ben Lippmeier, so we have
real
| Subject: Re: [Haskell-cafe] Re: Why can't Haskell be faster?
|
| On 01/11/2007, Simon Peyton-Jones [EMAIL PROTECTED] wrote:
| Yes, that's right. We'll be doing a lot more work on the code generator in
the rest of this year and 2008.
| Here "we" includes Norman Ramsey and John Dias, as well
Ketil Malde wrote:
Python used to do pretty well here compared
to Haskell, with rather efficient hashes and text parsing, although I
suspect ByteString IO and other optimizations may have changed that
now.
It still does just fine. For typical "munge a file with regexps, lists,
and maps"
On 10/31/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
I didn't keep a copy, but if someone wants to retrieve it from the Google
cache and put it on the new wiki (under the new licence, of course), please
do so.
Cheers,
Andrew Bromage
Done:
Unfortunately, they replaced line counts with bytes of gzip'ed code --
while the former certainly has its problems, I simply cannot imagine
what relevance the latter has (beyond hiding extreme amounts of
repetitive boilerplate in certain languages).
Sounds pretty fair to me. Programming is a
On 01/11/2007, Tim Newsham [EMAIL PROTECTED] wrote:
Unfortunately, they replaced line counts with bytes of gzip'ed code --
while the former certainly has its problems, I simply cannot imagine
what relevance the latter has (beyond hiding extreme amounts of
repetitive boilerplate in certain
Quoting Justin Bailey [EMAIL PROTECTED]:
Done: http://www.haskell.org/haskellwiki/RuntimeCompilation. Please
update it as needed.
Thanks!
Cheers,
Andrew Bromage
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
On 31/10/2007, Peter Hercek [EMAIL PROTECTED] wrote:
Anyway, if Haskell did some kind of whole-program analysis
and transformation, it could probably mitigate all these problems
to a certain degree.
I think JHC is supposed to do whole-program optimisations. Rumour has
it that its Hello
On 31/10/2007, Peter Hercek [EMAIL PROTECTED] wrote:
Add to that better unbox/box annotations; this may make an even
bigger difference than the strictness stuff, because it allows
you to avoid a lot of indirect references to data.
Anyway, if Haskell did some kind of whole-program
Paulo J. Matos wrote:
So the slowness of Haskell (compared to Clean) is a consequence of
its type system. OK, I'll stop, I did not write the Clean or Haskell
optimizers or stuff like that :-D
type system? Why is that? Shouldn't the type system in fact speed up the
generated code, since it will
Paulo J. Matos wrote:
type system? Why is that? Shouldn't the type system in fact speed up the
generated code, since it will know all types at compile time?
The *existence* of a type system is helpful to the compiler.
Peter was referring to the differences between Haskell and Clean.
On Wed, 31 Oct 2007 14:17:13 +
Jules Bean [EMAIL PROTECTED] wrote:
Specifically, Clean's uniqueness types allow for a certain kind of
zero-copy mutation optimisation which is much harder for a Haskell
compiler to automatically infer. It's not clear to me that it's
actually worth it, but
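The optimisation being described can be mimicked by hand. A minimal sketch (not from the thread; `set_copy` and `set_unique` are invented names) of the two update strategies uniqueness typing distinguishes:

```python
def set_copy(xs, i, v):
    # value semantics: copy the whole list so older versions stay valid
    ys = list(xs)
    ys[i] = v
    return ys

def set_unique(xs, i, v):
    # zero-copy update: only sound if xs has no other observers --
    # the condition Clean's uniqueness types prove at compile time,
    # letting the compiler pick this version without changing meaning
    xs[i] = v
    return xs

a = [1, 2, 3]
b = set_copy(a, 0, 9)
print(a, b)    # [1, 2, 3] [9, 2, 3] -- the original survives
c = set_unique(a, 0, 9)
print(c is a)  # True -- no allocation; a itself was overwritten
```

A Haskell compiler must infer the "sole owner" condition from usage, which is the hard part referred to above; Clean makes the programmer state it in the type.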
Robin Green wrote:
On Wed, 31 Oct 2007 14:17:13 +
Jules Bean [EMAIL PROTECTED] wrote:
Specifically, Clean's uniqueness types allow for a certain kind of
zero-copy mutation optimisation which is much harder for a Haskell
compiler to automatically infer. It's not clear to me that it's
Hi
I've been working on optimising Haskell for a little while
(http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
on this. The Clean and Haskell languages both reduce to pretty much
the same Core language, with pretty much the same type system, once
you get down to it - so I
ndmitchell:
Hi
I've been working on optimising Haskell for a little while
(http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
on this. The Clean and Haskell languages both reduce to pretty much
the same Core language, with pretty much the same type system, once
you get
So in a few years time when GHC has matured we can expect performance to be
on par with current Clean? So Clean is a good approximation to peak
performance?
--ryan
On 10/31/07, Don Stewart [EMAIL PROTECTED] wrote:
ndmitchell:
Hi
I've been working on optimising Haskell for a little while
goalieca:
So in a few years time when GHC has matured we can expect performance to
be on par with current Clean? So Clean is a good approximation to peak
performance?
The current Clean compiler, for micro benchmarks, seems to be rather
good, yes. Any slowdown wrt. the same program
On 31/10/2007, Don Stewart [EMAIL PROTECTED] wrote:
goalieca:
So in a few years time when GHC has matured we can expect performance to
be on par with current Clean? So Clean is a good approximation to peak
performance?
The current Clean compiler, for micro benchmarks, seems to
Hi
So in a few years time when GHC has matured we can expect performance to be
on par with current Clean? So Clean is a good approximation to peak
performance?
No. The performance of many real world programs could be twice as fast
at least, I'm relatively sure. Clean is a good short term
On 10/31/07, Neil Mitchell [EMAIL PROTECTED] wrote:
in the long run Haskell should be aiming for equivalence with highly
optimised C.
Really, that's not very ambitious. Haskell should be setting its
sights higher. :-)
When I first started reading about Haskell I misunderstood what
currying was
On Wed, 31 Oct 2007, Dan Piponi wrote:
But every day, while coding at work (in C++), I see situations where
true partial evaluation would give a big performance payoff, and yet
there are so few languages that natively support it. Of course it
would require part of the compiler to be present
There are many ways to implement currying. And even with GHC you can get it
to do some work given one argument if you write the function the right way.
I've used this in some code where it was crucial.
But yeah, a code generator at run time is a very cool idea, and one that has
been studied, but
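The point about getting some work done "given one argument if you write the function the right way" can be illustrated outside Haskell too. A hedged sketch (`match_maker` and `is_hex` are invented names) of staging expensive work at partial-application time:

```python
import re

def match_maker(pattern):
    # The expensive step (compilation) runs once, when the first
    # argument is supplied. The Haskell idiom is the same shape:
    #   f x = let r = compile x in \s -> match r s
    compiled = re.compile(pattern)
    def match(s):
        return compiled.search(s) is not None
    return match

is_hex = match_maker(r"\A[0-9a-fA-F]+\Z")
print(is_hex("deadbeef"))  # True
print(is_hex("0xzz"))      # False
```

Every later call to `is_hex` reuses the compiled pattern, which is the partial-evaluation payoff being discussed; a runtime code generator would go further and specialise the machine code itself.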
On Wed, 2007-10-31 at 23:44 +0100, Henning Thielemann wrote:
On Wed, 31 Oct 2007, Dan Piponi wrote:
But every day, while coding at work (in C++), I see situations where
true partial evaluation would give a big performance payoff, and yet
there are so few languages that natively support
On Wed, Oct 31, 2007 at 03:37:12PM +, Neil Mitchell wrote:
Hi
I've been working on optimising Haskell for a little while
(http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
on this. The Clean and Haskell languages both reduce to pretty much
the same Core language,
Hi
I don't think the register allocator is being rewritten so much as it is
being written:
From talking to Ben, who rewrote the register allocator over the
summer, he said that the new graph based register allocator is pretty
good. The thing that is holding it back is the CPS conversion bit,
On Thu, Nov 01, 2007 at 02:30:17AM +, Neil Mitchell wrote:
Hi
I don't think the register allocator is being rewritten so much as it is
being written:
From talking to Ben, who rewrote the register allocator over the
summer, he said that the new graph based register allocator is
On 01/11/2007, at 2:37 AM, Neil Mitchell wrote:
My guess is that the native code generator in Clean beats GHC, which
wouldn't be too surprising as GHC is currently rewriting its CPS and
Register Allocator to produce better native code.
I discussed this with Rinus Plasmeijer (chief designer
G'day all.
Quoting Derek Elkins [EMAIL PROTECTED]:
Probably RuntimeCompilation (or something like that and linked from the
Knuth-Morris-Pratt implementation on HaWiki) written by Andrew Bromage.
I didn't keep a copy, but if someone wants to retrieve it from the Google
cache and put it on the