Re: Inferring from context declarations

2001-02-22 Thread Lennart Augustsson

Thomas Johnsson wrote:

> Lennart Augustsson writes:
> > > Simon Peyton Jones' comments about dictionary passing are a red herring,
> > > since they assume a particular form of compiler.  Various (MLj, MLton)
> > > ML compilers already inline out all polymorphism.
> > ML is a language where you can do this.  In Haskell it is not always
> > possible to eliminate all polymorphism (due to polymorphic recursion).
>
> I'd like to see an example of this!

Sure, here's one where you can't bound the number of dictionaries:

main = interact $ show . f 'a' . read

f :: (Eq a) => a -> Integer -> Bool
f x 0 = x == x
f x n = f [x] (n-1)


If the input is 0 the comparison is of Char,
if the input is 1 the comparison is of [Char],
if the input is 2 the comparison is of [[Char]],
etc.

Incidentally, this has nothing to do with allowing polymorphic recursion
on functions in Haskell.  It could be done earlier too, but then it had
to be encoded using a class and instance declaration.
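[For reference, here is one way the encoding Lennart alludes to could look.  This is my own sketch; the class name F and method name fm are invented for illustration, not taken from any actual message in this thread.]

```haskell
-- Sketch of encoding the polymorphically recursive f through a class
-- and instance declarations.  The Eq superclass supplies (==).
class Eq a => F a where
  fm :: a -> Integer -> Bool

-- The method body is identical in every instance; the recursive call
-- fm [x] resolves through the list instance below, so the chain of
-- dictionaries for Char, [Char], [[Char]], ... is built at run time.
instance F Char where
  fm x 0 = x == x
  fm x n = fm [x] (n-1)

instance F a => F [a] where
  fm x 0 = x == x
  fm x n = fm [x] (n-1)

main :: IO ()
main = print (fm 'a' 3)   -- ends up comparing a triply nested list with itself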

-- Lennart



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Inferring from context declarations

2001-02-22 Thread Andreas Rossberg

Lennart Augustsson wrote:
 
 Incidentally, this has nothing to do with allowing polymorphic recursion
 on functions in Haskell.  It could be done earlier too, but then it had
 to be encoded using a class and instance declaration.

I would argue that methods are in fact polymorphically recursive
functions. Wasn't this one motivation to allow general polymorphic
recursion in Haskell - that it is in the language anyway?

- Andreas

-- 
Andreas Rossberg, [EMAIL PROTECTED]

"Computer games don't affect kids.
 If Pac Man affected us as kids, we would all be running around in
 darkened rooms, munching pills, and listening to repetitive music."




Re: Inferring from context declarations

2001-02-21 Thread Lennart Augustsson

 Simon Peyton Jones' comments about dictionary passing are a red herring,
 since they assume a particular form of compiler.  Various (MLj, MLton)
 ML compilers already inline out all polymorphism.
ML is a language where you can do this.  In Haskell it is not always
possible to eliminate all polymorphism (due to polymorphic recursion).

-- Lennart




RE: Inferring from context declarations

2001-02-21 Thread Mark P Jones

| [One way to compile polymorphic code is to inline all uses of
| polymorphic values...]
| 
| It would require keeping the bodies of all polymorphic functions in
| a form allowing instantiation in different modules. For example,
| GHC already requires too much memory (even a hundred MB for large
| modules), and it leads to code bloat, so in the current state of
| technology it's not advisable to always compile out polymorphism.

Separate compilation is an issue here, but I'm not at all sure that
code bloat would be a problem in practice.  I say this on the basis
of some experiments that I did (a long time ago now), which you can
find described in my paper:

   Dictionary-free Overloading by Partial Evaluation
   http://www.cse.ogi.edu/~mpj/pubs/pepm94.html
   Be sure to get the (longer) Yale Technical report version,
   not the PEPM paper, which omits some specific material that
   is relevant to this discussion.

The primary purpose of this paper was to investigate what would happen
if overloading was implemented by generating specialized versions of
each overloaded function.  (In other words, by using a specialized form
of partial evaluation to eliminate the need for dictionaries.)  In every
single case that I tried, specialization resulted in *smaller* programs:
although some functions were duplicated, others---which had previously
sat unreferenced inside dictionaries---could now be identified (and
hence removed) as dead code.
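[An invented illustration of the transformation Mark describes, not an example from the paper: an overloaded function is replaced by monomorphic copies at the types actually used, after which the dictionaries, and any unused methods inside them, are dead code.]

```haskell
-- Overloaded original: compiled naively, this receives an Eq dictionary.
member :: Eq a => a -> [a] -> Bool
member _ []     = False
member x (y:ys) = x == y || member x ys

-- Specialized versions a compiler could emit for the two call sites
-- below; (==) is now a direct, monomorphic comparison in each.
memberInt :: Int -> [Int] -> Bool
memberInt _ []     = False
memberInt x (y:ys) = x == y || memberInt x ys

memberChar :: Char -> String -> Bool
memberChar _ []     = False
memberChar x (y:ys) = x == y || memberChar x ys

main :: IO ()
main = print (memberInt 3 [1,2,3], memberChar 'q' "haskell")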

I also experimented to see what kind of effect specialization might
have if used to eliminate polymorphism.  (You need to read the Technical
Report version of the paper for this.)  The results of that suggested
that the resulting programs might contain perhaps twice as many functions
as the original.  But note:

 1) Polymorphic functions tend to be quite small (intuitively, the
bigger the code for a function gets, the more constrained its
type becomes).  Because these are the functions whose bodies are
duplicated, a doubling in the number of functions may result in
something much less than a doubling in overall code size.

 2) Small functions are obvious candidates for inlining, so worries
about code bloat may not be realized because the code for many
of these functions is already being duplicated in existing,
inlining compilers.

 3) It may not be necessary to have distinct versions of the code
for a polymorphic function at every possible type; instead,
the implementations at different types could share the same
code.  For example, if, at the lowest level, integers and
list values are represented as 32 bit quantities, then the
implementations for the identity function on Int and for the
identity function on lists could well be the same.  In other
words, we might only need to index implementations on the size
of polymorphic type parameters, and not on the parameters
themselves.  This, I believe, would address the problems that
Lennart described involving polymorphic recursion.  It might
also help to solve the separate compilation issue.

All the best,
Mark





Re: Inferring from context declarations

2001-02-21 Thread George Russell

Andreas Rossberg wrote:
 
 George Russell wrote:
 
  I'm sorry if this is a FAQ, but I'm curious to know why Haskell (or at
  least, GHC and Hugs) doesn't seem able to use contexts on variables at the
  front of data declarations.
 
 There has been some discussion on contexts on data declarations during
 the Haskell 98 standardization process. See:
 
 http://www.cs.chalmers.se/~rjmh/Haskell/Messages/Display.cgi?id=302
[snip]
I see that normal mortals, being only users, are not permitted to contribute
to this particular bulletin board.  But I would like to point out that
(1) Even in their current form, contexts in data declaration serve a useful
purpose, since they provide and enforce a form of documentation.
(2) If the context in a data declaration were made publicly available, it would
be even more useful, for all the good reasons given.
(3) Simon Peyton Jones' comments about dictionary passing are a red herring,
since they assume a particular form of compiler.  Various (MLj, MLton)
ML compilers already inline out all polymorphism. Some C++ compilers/linkers
do it in a rather crude way as well, for templates.  If you can do it,
you can forget about dictionary passing.  The other benefits are so great
(strict types can always be unboxed for one thing) that I strongly suspect
that in ten years or so, when machines are fast enough, this will be the
norm for producing production code.
   For the next few years, or when compilation speed is more important than
run-time speed, or for dynamic code loading, we can expect at least some
code to remain polymorphic when compiled.  But in all these cases, I don't
think the extra cost of dictionary passing is significant.
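[An invented example of the kind of declaration under discussion; the names Set and insert are mine.  The DatatypeContexts pragma is a no-op for Haskell 98 but is needed by later GHCs, where the feature became an opt-in extension.]

```haskell
{-# LANGUAGE DatatypeContexts #-}

-- The (Eq a) context documents the intent that Set only holds comparable
-- elements, and is enforced wherever a Set value is constructed.
data Eq a => Set a = Set [a]

-- The limitation point (2) above refers to: pattern matching on Set does
-- NOT make Eq a publicly available, so every function must still demand
-- the constraint again in its own signature.
insert :: Eq a => a -> Set a -> Set a
insert x s@(Set xs)
  | x `elem` xs = s
  | otherwise   = Set (x : xs)

size :: Eq a => Set a -> Int
size (Set xs) = length xs

main :: IO ()
main = print (size (insert 3 (insert 3 (Set [1, 2]))))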




Re: Inferring from context declarations

2001-02-21 Thread George Russell

Andreas Rossberg wrote:
[snip]
 Such monomorphisation is not possible for Haskell in general, because it
 allows polymorphic recursion. As a consequence, the number of
 dictionaries constructed for a given program also is potentially
 infinite.
[snip]
Yes OK, that's another case, like dynamically-loaded code, where you can't
inline everything.  You could keep the existing mechanisms (dictionaries and all) 
for a polymorphic call which was not inlined (this will be slow, but I doubt
if many programs spend much of their time in functions that use polymorphic
recursion), or if you have unlimited programming time, you could do the
monomorphisation on-demand in the run-time-system.




Re: Inferring from context declarations

2001-02-21 Thread D. Tweed

George Russell wrote:
 
 (3) Simon Peyton Jones' comments about dictionary passing are a red herring,
 since they assume a particular form of compiler.  Various (MLj, MLton)
 ML compilers already inline out all polymorphism. Some C++ compilers/linkers
 do it in a rather crude way as well, for templates.  If you can do it,
 you can forget about dictionary passing.

[Standard disclaimer: I write prototype code that's never `finished' to
ever-changing specs in a university environment; other people probably
view things differently.]

I'm not sure I'd agree with this. Note that there are two levels: inlining
polymorphic functions at the call site, and `instantiating polymorphic
functions at each usage type' without doing the inlining. C++ compilers
have to do at least the second because of the prevailing philosophy of
what templates are (i.e., that they're safer function-macros). Some of the
time this is what's wanted, but sometimes it imposes annoying compilation
issues (the source code of the polymorphic function has to be available
every time you want to use the function on a new class, even if it's not
time-critical, which isn't the case for Haskell). I also often
write/generate very large polymorphic functions that in an ideal world
(where compilers can do _serious, serious_ magic) I'd prefer to implement
via something similar to dictionary passing. I'd argue
that keeping flexibility about polymorphic function implementation (which
assumes some default but can be overridden by the programmer) in Haskell
compilers is a Good Thing.

Given that, unless computing hardware really revolutionises, the
`speed/memory' profile of today's desktop PC is going to recur in wearable
computers/PDAs/etc., I believe that in 20 years' time we'll still be figuring
out the same trade-offs, and so need to keep that flexibility.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law:  however many computers
email: [EMAIL PROTECTED] |you have, half your time is spent
work tel: (0117) 954-5250  |waiting for compilations to finish.



___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe