Re: The dreaded M-R

2006-02-04 Thread Ben Rudiak-Gould
Having thought about this for a while I'm coming down on the side of keeping 
some sort of monomorphism restriction, for the following reason.


It's hard to bound the space consumption of a Haskell program, but easy to 
bound its time consumption: its asymptotic runtime will be the same as or 
better than an "equivalent" ML program, because each bit of code gets 
evaluated at most once for every time that ML would evaluate it. Even 
optimistic evaluation preserves this property. Dropping the monomorphism 
restriction would break it.


I understand the theoretical appeal of polymorphism by default, but there's 
also a lot of theoretical appeal to dropping thunk updating, and in practice 
I dread both ideas. I don't want to have to think, every time I bind a 
variable, about whether it might be evaluated more than once. I also don't 
want to have to track down multiple-evaluation cases with runtime profiling. 
I want the compiler to notice these problems for me. So if polymorphism were 
the default, I would end up defensively requesting monomorphism almost all 
the time.


Therefore I'm strongly opposed to any proposal that requires a type 
signature for monomorphism, because it would create a large amount of work 
for me. I'm moderately opposed to the (x) = ... proposal, because it would 
uglify my code. I'm moderately in favor of John Hughes's original := 
proposal, because it seems less ugly.


Now, I rarely run into the monomorphism restriction as it stands because I 
rarely write point-free code. And I suspect this is what the debate is 
really about: it's the point-free people who usually want polymorphism 
versus the, uh, pointwise people who usually don't. For myself I'd be 
happiest keeping the monomorphism restriction as it now stands, but maybe 
John Hughes's proposal is the best compromise between the two camps 
(assuming they exist outside my imagination).


-- Ben



Re: The dreaded M-R

2006-02-03 Thread Scott Turner
Following the helpful call to attend to priorities, I reluctantly return to 
the M-R discussion. I believe a point has been missed that should be a part 
of this thread.

On 2006 January 30, Josef Svenningsson wrote:
> But the most important argument against M-R hasn't been put forward yet.
>
> Haskell is, and has always been, a non-strict language. *Not* a lazy
> language.

That is correct, but it is not a welcome argument. Haskell's unspecified 
evaluation order is elegant, and gives implementers a happy flexibility. But 
Haskell has no need to allow innovative experiments within the report.

On the contrary, practical Haskell programs and libraries rely on sharing. 
Without sharing, the humble Fibonacci example takes exponential time. If the 
report were to clearly mandate the sharing that nearly everyone counts on, it 
would be a benefit. The := syntax suggested by John Hughes is an obvious 
point at which sharing could be mandated. 
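
To make the sharing point concrete (an illustrative sketch, not from the
original message): the familiar lazy Fibonacci list is only linear-time
because the 'fibs' binding is evaluated once and shared; if an
implementation were free to recompute the binding at every use, the very
same definition would take exponential time.

    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- Linear only because 'fibs' is memoised by sharing:
    nthFib :: Int -> Integer
    nthFib n = fibs !! n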

The wiki page 
http://hackage.haskell.org/trac/haskell-prime/wiki/MonomorphismRestriction
counts "introducing a concept of sharing into the report" as a negative. In 
the larger context of bolstering Haskell's support for mainstream 
applications, sharing is worthwhile.


Re: Test performance impact (was: The dreaded M-R)

2006-02-02 Thread Andrew Pimlott
On Thu, Feb 02, 2006 at 12:34:30PM -, Simon Marlow wrote:
> Still, you could argue that it doesn't actually tell you the cause of
> the problem: namely that i&j are being evaluated twice as often as you
> might expect by looking at the code.

Would not the "entries" count in the profile tip you off to this?  The
entries for i should be twice that for accumCharge, right?

Andrew


Re[2]: Test performance impact (was: The dreaded M-R)

2006-02-02 Thread Bulat Ziganshin
Hello John,

Thursday, February 02, 2006, 12:51:58 PM, you wrote:

JH> Let me make clear that what concerns me is not the impact of the M-R on
JH> space and time
JH> performance on average. What concerns me is the difficulty of debugging 
JH> performance
JH> problems.

Maybe it's better in such a case to add an _optional_ compiler warning
about possible performance problems, specifically for programmers who hunt
for such holes. Haskell itself would then remain an elegant higher-order
language which doesn't bite newcomers with this really rare problem.



-- 
Best regards,
 Bulat                          mailto:[EMAIL PROTECTED]





RE: Test performance impact (was: The dreaded M-R)

2006-02-02 Thread Simon Marlow
On 02 February 2006 09:52, John Hughes wrote:

>   Summary: 2 programs failed to compile due to type errors (anna, gg).
>   One program did 19% more allocation, a few other programs increased
>   allocation very slightly (<2%).
> 
>   pic +0.28%   +19.27%  0.02
> 
> Thanks, that was interesting. A follow-up question: pic has a space
> bug. How long will it take you to find and fix it?

I just tried this, and it took me just a few minutes.  Compiling both
versions with profiling, for the original:

total time  =    0.00 secs   (0 ticks @ 20 ms)
total alloc =  11,200,656 bytes  (excludes profiling overheads)

COST CENTRE    MODULE          %time %alloc

chargeDensity  ChargeDensity     0.0    2.5
accumCharge    ChargeDensity     0.0   13.5
relax          Potential         0.0   31.4
correct        Potential         0.0    5.0
genRand        Utils             0.0    1.0
fineMesh       Utils             0.0    2.4
applyOpToMesh  Utils             0.0   12.7
=:             Utils             0.0    2.3
pushParticle   PushParticle      0.0   16.1
timeStep       Pic               0.0   11.0


and with the monomorphism restriction turned off:


total time  =    0.02 secs   (1 ticks @ 20 ms)
total alloc =  12,893,544 bytes  (excludes profiling overheads)

COST CENTRE    MODULE          %time %alloc

pushParticle   PushParticle    100.0   20.8
chargeDensity  ChargeDensity     0.0    2.2
accumCharge    ChargeDensity     0.0   18.0
relax          Potential         0.0   27.3
correct        Potential         0.0    4.4
fineMesh       Utils             0.0    2.1
applyOpToMesh  Utils             0.0   11.1
=:             Utils             0.0    2.0
timeStep       Pic               0.0    9.5


So, ignoring the %time column (the program didn't run long enough for
the profiler to get enough time samples), we can see the following
functions increased their allocation as a % of the total:

  pushParticle, accumCharge

Looking at the code for accumCharge:

accumCharge :: [Position] -> [MeshAssoc]
accumCharge [] = []
accumCharge ((x,y):xys) =
    [((i ,j ) , charge * (1-dx) * (1-dy))] ++
    [((i',j ) , charge * dx * (1-dy))] ++
    [((i ,j') , charge * (1-dx) * dy)] ++
    [((i',j') , charge * dx * dy)] ++
    accumCharge xys
    where
        i  = truncate x
        i' = (i+1) `rem` nCell
        j  = truncate y
        j' = (j+1) `rem` nCell
        dx = x - fromIntegral i
        dy = y - fromIntegral j

Now, because I know what I'm looking for, I can pretty quickly spot the
problem.  I had to look at the definition of MeshAssoc to figure out
that the result type of this function forces i to have type Int, yet it
is used elsewhere as the argument to fromIntegral, where, if i is
overloaded, it will be defaulted to Integer.  When I give type signatures
to i and j (:: Int), the allocation reduces.
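
For concreteness, the fix might look like this (a sketch of the changed
where-clause only, not necessarily the exact nofib patch):

    where
        i, j :: Int          -- pin the overloaded bindings to Int
        i  = truncate x
        i' = (i+1) `rem` nCell
        j  = truncate y
        j' = (j+1) `rem` nCell
        dx = x - fromIntegral i
        dy = y - fromIntegral j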

The pushParticle function has an identical pattern.  Fixing these two
functions brought the performance back to the original.  But I've also
changed the semantics - the author might have *wanted* i at type Integer
in the definition of dx to avoid overflow, and the monomorphism
restriction had prevented it.

I suppose you could ask how you'd find the problem if you didn't know
what to look for.  So I added some more annotations:

i = {-# SCC "i" #-} truncate x
i' = {-# SCC "i'" #-}  (i+1) `rem` nCell
j = {-# SCC "j" #-} truncate y
j' = {-# SCC "j'" #-} (j+1) `rem` nCell
dx = {-# SCC "dx" #-} x - fromIntegral i
dy = {-# SCC "dy" #-} y - fromIntegral j

and the profiling output shows:

i              ChargeDensity   100.0    6.8
j              ChargeDensity     0.0    6.8
chargeDensity  ChargeDensity     0.0    2.2
accumCharge    ChargeDensity     0.0    3.9
relax          Potential         0.0   27.2
...

So this pretty clearly identifies the problem area (although the figures
don't quite add up, I suspect the insertion of the annotations has
affected optimisation in some way).

Still, you could argue that it doesn't actually tell you the cause of
the problem: namely that i&j are being evaluated twice as often as you
might expect by looking at the code.  This is what the compiler warning
would do, and I completely agree that not having this property evident
by looking at the source code is a serious sh

Re: Test performance impact (was: The dreaded M-R)

2006-02-02 Thread Malcolm Wallace
John Hughes <[EMAIL PROTECTED]> writes:

> Summary: 2 programs failed to compile due to type errors (anna, gg).
> One program did 19% more allocation, a few other programs increased
> allocation very slightly (<2%).
> 
> pic +0.28%   +19.27%  0.02
> 
>   
> 
> Thanks, that was interesting. A follow-up question: pic has a space bug. 
> How long will it take you to find and fix it?

Purely by looking at the code, without using automated help, about
20 minutes.

For reference, the overloaded local constants are:
i,j in PushParticle.pushParticle
i,j in ChargeDensity.accumCharge
In both cases, they need a type signature :: Indx

Regards,
Malcolm


RE: Test performance impact (was: The dreaded M-R)

2006-02-02 Thread John Hughes





> Summary: 2 programs failed to compile due to type errors (anna, gg).
> One program did 19% more allocation, a few other programs increased
> allocation very slightly (<2%).
>
> pic +0.28%   +19.27%  0.02

Thanks, that was interesting. A follow-up question: pic has a space bug.
How long will it take you to find and fix it? And how come speed improved
slightly in many cases--that seems counter-intuitive.

Let me make clear that what concerns me is not the impact of the M-R on
space and time performance on average. What concerns me is the difficulty
of debugging performance problems. Even if we Haskell programmers usually
consider correctness first and performance later, when we want to actually
use our beautiful code, we need to find and fix at least the worst
performance problems.

At that point, I'm reading the code and trying to understand--with the help
of profiling tools--why it performs the way it does. I need the performance
behaviour of the code to be *obvious*. When I pull out a common
sub-expression into a let-binding, I want to *know* that it will be
evaluated at most once. When I see a collection of variable bindings,
likewise, I want to *know* that they are computed once only. When I see a
seq on such a variable, I want to *know* that any pointers in its
unevaluated form are thereby dropped. When I introduce or remove
overloading in a function definition, I want to *know* that I have not
thereby changed the sharing properties of a variable binding somewhere
else, potentially introducing a space or time leak. I want to know these
things by looking at the code in front of me, not by running the compiler
with the right flags and reading its warning messages. Unexpected behaviour
at this point is a trap that can lead to many wasted hours as one reads the
same code again and again, assuming it behaves as expected, only to
eventually convince oneself, by long examination of profiles, that it
cannot be doing so. If the unexpected behaviour is rare, that only means
that when it *does* happen, it will take all the longer to find and fix. I
find languages with such traps intensely frustrating to work with. If I
can't even tell, *by reading it*, what my code means, then what hope have I
got?
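
A small instance of the kind of silent change being described here (an
illustrative sketch, not from the original message): under today's M-R the
binding below is monomorphic and computed once; with the M-R dropped and no
signature, 'len' becomes overloaded (Num i => i) and each use may recompute
genericLength with its own dictionary.

    import Data.List (genericLength)

    stats :: [Double] -> (Int, Integer)
    stats xs = (fromIntegral len, len)
      where
        len = genericLength xs   -- shared under the M-R; overloaded,
                                 -- and potentially recomputed, without it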

There are plenty who will argue, quite reasonably, that lazy evaluation in
itself makes understanding performance behaviour unreasonably difficult;
that strict languages should be preferred because the space and time
behaviour is much more obvious. I don't agree, because I find the software
engineering benefits of lazy evaluation to be so large that it's worth the
cost in making performance reasoning more difficult--but I accept there is
a trade-off there. I don't see benefits on anything like the same scale
from dropping the M-R.

Today's M-R has the merit that it at least makes it evident, in the code,
where sharing may be lost. It's often a minor irritant, occasionally a bit
more than minor. Were it to be removed with no replacement, then the
irritation would go away, and the signs are that most of the time,
performance would not change much. But occasionally--just
occasionally--there would be a space or time bug caused by loss of sharing
*somewhere* in a large body of code, and searching for it, we would descend
into nightmare. I'll put up with the irritation any day, to be sure of
being spared the nightmare.

John




Re: The dreaded M-R

2006-02-01 Thread Andrew Pimlott
On Wed, Feb 01, 2006 at 10:32:26AM -, Simon Marlow wrote:
> On 31 January 2006 17:48, Andrew Pimlott wrote:
> > This indicates that the warning "wouldn't happen much" _when you want
> > sharing_.  But it would happen all the time when you don't want
> > sharing, eg. in the case Malcolm Wallace just posted.
> 
> So you either add a type signature, or turn off the warning.

Right (provided you understand the warning).  Though I don't think many
people would turn off a default warning--it's just too much trouble.  (I
always add dummy definitions of coarbitrary to my Arbitrary instances,
rather than figure out how to suppress the undefined method warning.)
So in practice, I think most people would add a type signature.

> What's the problem?  

I've learned from this sub-thread that people oppose the M-R for
different reasons.  Some dislike it on principle, or on aesthetic
grounds, and they would be happy to add a type signature.  But others
think it would be useful to define polymorphic variables without a type
signature, and they would not be happy to add one.

> I suspect you're saying that you don't want a warning by default, and
> you don't want the language to recommend that compilers issue a warning
> by default, right?  If so, your objection is duly noted and I'll add the
> point to the wiki.

Cool.

Andrew


Re: Test performance impact (was: The dreaded M-R)

2006-02-01 Thread John Meacham
I think that, given these results, I would have absolutely no issues with
dropping the M-R completely. In fact, I'd recommend it.

If we must do something I don't think it is worth eating an operator for
a new type of binding, but some shorthand syntax

(x) = foo 

being sugar for the equivalent of

(_,x) = (undefined,foo)

would not be objectionable.
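
For reference, the shorthand presumably relies on non-simple pattern
bindings remaining monomorphic, as they are in Haskell 98 today. A small
sketch of today's behaviour (an illustrative example spelling out the
tuple encoding rather than the proposed sugar):

    import Data.List (genericLength)

    xs :: [Double]
    xs = [1 .. 10]

    -- 'len' is monomorphic (and so computed at most once), because it is
    -- bound by a tuple pattern rather than a simple variable binding:
    (_, len) = (undefined, genericLength xs)

    main :: IO ()
    main = print (len + len :: Integer)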

John



-- 
John Meacham - ⑆repetae.net⑆john⑈ 


RE: Test performance impact (was: The dreaded M-R)

2006-02-01 Thread Simon Marlow
On 01 February 2006 11:42, Nils Anders Danielsson wrote:

> However, to stand on more solid ground I suggest that someone runs
> some performance tests, with and without
> -fno-monomorphism-restriction, to see whether the M-R has any great
> impact in practice. There are some performance test suites based on
> real code out there, right? Nofib?

Results below (ignore runtime differences, I was averaging over 5 runs
but most of these benchmarks run too quickly to get any reasonable
measurements).

Summary: 2 programs failed to compile due to type errors (anna, gg).
One program did 19% more allocation, a few other programs increased
allocation very slightly (<2%).



        Program           Size    Allocs   Runtime

           anna              -         -         -
           ansi         +0.00%    +0.00%      0.00
           atom         +0.00%    +0.00%    -2.92%
         awards         +0.00%    +0.00%      0.00
         banner         +0.00%    +0.00%      0.00
     bernouilli         -0.23%    +0.00%    -7.44%
          boyer         +0.00%    +0.00%      0.00
         boyer2         +0.00%    +0.00%      0.01
           bspt         +0.00%    +0.00%      0.02
      cacheprof         +0.14%    +0.00%    -9.86%
       calendar         +0.00%    +0.00%      0.00
       cichelli         +0.00%    +0.00%    +0.99%
        circsim         +0.00%    +0.00%    -0.93%
       clausify         +0.00%    +0.00%      0.01
  comp_lab_zift         +0.00%    +0.00%    +1.86%
       compress         +0.00%    +0.00%    -5.58%
      compress2         +0.00%    +0.00%    -7.63%
    constraints         +0.00%    +0.00%    -9.03%
   cryptarithm1         +0.00%    +0.00%    -6.27%
   cryptarithm2         +0.00%    +0.00%      0.03
            cse         +0.00%    +0.00%      0.00
          eliza         +0.00%    +0.00%      0.00
          event         +0.00%    +0.00%    -7.04%
         exp3_8         +0.00%    +0.00%   -10.84%
         expert         +0.00%    +0.00%      0.00
            fem         +0.35%    +1.76%      0.06
            fft         +0.00%    +0.00%      0.08
           fft2         +0.00%    +0.00%      0.15
       fibheaps         +0.00%    +0.00%      0.08
           fish         +0.00%    +0.00%      0.03
          fluid         +0.12%    +0.04%      0.02
         fulsom         -0.14%    +0.00%    -2.10%
         gamteb         +2.47%    +0.56%      0.15
            gcd         +0.00%    +0.00%    -5.88%
    gen_regexps         +0.00%    +0.00%      0.00
         genfft         +0.00%    +0.00%      0.04
             gg              -         -         -
           grep         +0.00%    +0.00%      0.00
         hidden         +0.00%    +0.00%   -12.48%
            hpg         +0.00%    +0.00%    -7.92%
            ida         +0.00%    +0.00%   -12.50%
          infer         +0.00%    +0.00%      0.13
        integer         +0.00%    +0.00%    -9.81%
      integrate         +0.00%    +0.00%    -5.47%
        knights         +0.19%    +0.00%      0.01
           lcss         +0.00%    +0.00%    -7.42%
           life         +0.75%    +0.00%   -16.22%
           lift         +0.00%    +0.00%      0.00
      listcompr         +0.00%    +0.00%    -9.76%
       listcopy         +0.00%    +0.00%    -8.33%
       maillist         +0.00%    +0.03%      0.06
         mandel         +0.00%    +0.00%    +0.00%
        mandel2         +0.00%    +0.00%      0.02
        minimax         +0.00%    +0.00%      0.01
        mkhprog         +0.00%    +0.00%      0.00
     multiplier         +0.00%    +0.00%   -10.43%
       nucleic2         +0.61%  (stdout)  (stdout)
           para         +0.00%    +0.00%    -2.51%
      paraffins         +0.00%    +0.00%   -10.07%
         parser         +0.00%    +0.00%      0.06
        parstof         +0.00%    +0.00%      0.01
            pic         +0.28%   +19.27%      0.02
          power         +0.29%    +0.00%   -11.62%
         pretty         +0.00%    +0.00%      0.00
         primes         +0.00%    +0.00%      0.13
      primetest         +0.00%    +0.00%    +0.77%
         prolog         +0.00%    +0.00%      0.00
         puzzle         +0.00%    +0.00%    -5.03%
         queens         +0.00%    +0.00%      0.06
        reptile         +0.00%    +0.00%      0.03
        rewrite         +0.00%    +0.00%      0.00
           rfib         +0.00%    +0.00%      0.04
            rsa         +0.00%    +0.00%      0.06
            scc         +0.00%    +0.00%      0.00
          sched         +0.00%    +0.00%      0.05
            scs         +0.54%    +0.06%   -11.14%
         simple         +1.70%    +0.00%    -4.68%
          solid         +0.00%    +0.00%    -2.03%
        sorting         +0.00%    +0.00%      0.00
         sphere         +0.06%    +0.00%    -1.74%
         symalg         +0.00%  

M-R: Test performance impact (was: The dreaded M-R)

2006-02-01 Thread Nils Anders Danielsson
On Mon, 30 Jan 2006, "Simon Marlow" <[EMAIL PROTECTED]> wrote:

> Given the new evidence that it's actually rather hard to demonstrate any
> performance loss in the absence of the M-R with GHC, I'm attracted to
> the option of removing it in favour of a warning.

I also want to remove the M-R, because of various reasons that have
been mentioned before.

However, to stand on more solid ground I suggest that someone runs
some performance tests, with and without
-fno-monomorphism-restriction, to see whether the M-R has any great
impact in practice. There are some performance test suites based on
real code out there, right? Nofib?

-- 
/NAD



RE: The dreaded M-R

2006-02-01 Thread Simon Marlow
On 31 January 2006 17:48, Andrew Pimlott wrote:

> On Tue, Jan 31, 2006 at 10:17:57AM -, Simon Marlow wrote:
>> On 30 January 2006 21:49, Andrew Pimlott wrote:
>>> In the present case, people aren't (only) opposing the M-R out of
>>> principle, but because they actually use overloaded variable
>>> definitions and (at least sometimes) want to leave off the
>>> signature. 
>>> 
>>> So I don't see how one could claim, as on the wiki, the warning
>>> "wouldn't happen much".  I suspect it would happen, and annoy
>>> people, and defeat the reason that people want to remove the M-R.
>> 
>> The assertion that it "wouldn't happen much" is based on the
>> observation earlier in this thread that it was actually difficult to
>> write some code that illustrated the problem.
> 
> This indicates that the warning "wouldn't happen much" _when you want
> sharing_.  But it would happen all the time when you don't want
> sharing, eg. in the case Malcolm Wallace just posted.

So you either add a type signature, or turn off the warning.  What's the
problem?  

I suspect you're saying that you don't want a warning by default, and
you don't want the language to recommend that compilers issue a warning
by default, right?  If so, your objection is duly noted and I'll add the
point to the wiki.

Cheers,
Simon


Re: The dreaded M-R

2006-01-31 Thread Ben Rudiak-Gould

John Meacham wrote:

> interestingly enough, the monomorphism restriction in jhc actually
> should apply to all polymorphic values, independently of the type class
> system.
>
> x :: a
> x = x
>
> will transform into something that takes a type parameter and is hence
> not shared.


Interesting. I'd been wondering how you dealt with this case, and now it 
turns out that you don't. :-)



> I doubt this will cause a problem in practice since there
> aren't really any useful values of type forall a . a other than bottom.


It could become an issue with something like

  churchNumerals :: [(a -> a) -> (a -> a)]
  churchNumerals = ...

Maybe you could use a worker-wrapper transformation.

  churchNumerals' :: [(T -> T) -> (T -> T)]
  churchNumerals' = ...

  churchNumerals :: [(a -> a) -> (a -> a)]
  churchNumerals = /\ a . unsafeCoerce churchNumerals'

The unsafeCoerce is scary, but it feels right to me. There is something 
genuinely unsavory about this kind of sharing, in Haskell or any other ML 
dialect. At least here it's out in the open.
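
A source-level rendering of that sketch (an illustrative example; the /\
above is a Core-style type abstraction, and T here is just a stand-in for
whatever representation type the compiler would pick):

    import Unsafe.Coerce (unsafeCoerce)

    data T = T

    -- Shared, monomorphic worker: Church numerals 0, 1, 2, ... at type T.
    churchNumerals' :: [(T -> T) -> (T -> T)]
    churchNumerals' = iterate (\n f -> f . n f) (\_f x -> x)

    -- Polymorphic wrapper: the list is built once, then coerced.
    churchNumerals :: [(a -> a) -> (a -> a)]
    churchNumerals = unsafeCoerce churchNumerals'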


-- Ben



Re: The dreaded M-R

2006-01-31 Thread Ben Rudiak-Gould

Josef Svenningsson wrote:

> There are many evaluation strategies one can use to implement Haskell.
> The problem with the M-R is that it is a concern only in *some* of these
> evaluation strategies, most notably lazy evaluation.


True, but it's a concern in any evaluation strategy that tries to avoid
multiple evaluation of let-bound expressions, which includes lazy,
optimistic, and eager evaluation. A strict dialect of ML with type classes
would face the same problems.


> If you read the motivation section which defines the M-R [...] the report
> suddenly starts to talk about how many times a certain expression is
> evaluated. But nowhere in the report is it defined how expressions should
> be evaluated. This makes the M-R truly butt-ugly!


I agree, but you don't have to specify lazy evaluation in order to justify
the M-R. Some sort of nondeterministic graph reduction semantics would be
good enough.

-- Ben



Re: The dreaded M-R

2006-01-31 Thread Andrew Pimlott
On Tue, Jan 31, 2006 at 10:17:57AM -, Simon Marlow wrote:
> On 30 January 2006 21:49, Andrew Pimlott wrote:
> > In the present case, people aren't (only) opposing the M-R out of
> > principle, but because they actually use overloaded variable
> > definitions and (at least sometimes) want to leave off the signature.
> >
> > So I don't see how one could claim, as on the wiki, the warning
> > "wouldn't happen much".  I suspect it would happen, and annoy people,
> > and defeat the reason that people want to remove the M-R.
> 
> The assertion that it "wouldn't happen much" is based on the observation
> earlier in this thread that it was actually difficult to write some code
> that illustrated the problem.

This indicates that the warning "wouldn't happen much" _when you want
sharing_.  But it would happen all the time when you don't want sharing,
eg. in the case Malcolm Wallace just posted.

Andrew


Re: The dreaded M-R

2006-01-31 Thread Malcolm Wallace
John Hughes <[EMAIL PROTECTED]> writes:

> Ambiguous type variable `a' in the constraint:
>   `Ord a' arising from use of `Data.Set.insert' at Pretty.hs:28:11-20
> Possible cause: the monomorphism restriction applied to the following:
>   addToSet :: a -> Data.Set.Set a -> Data.Set.Set a
> Probable fix: give these definition(s) an explicit type signature
>   or use -fno-monomorphism-restriction
> 
> Well, is it OK? From the type-checker's point of view, yes, But have you
> lost sharing?

Yes, I have lost sharing, but then again, sharing is impossible here
anyway.  In fact, the monomorphism restriction wanted to force sharing,
but the types indicated it was not possible, hence the error.  :-)

What the M-R was complaining about is my having eta-reduced a
definition into completely point-free style.
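
For concreteness, a reconstruction (built only from the names in the error
message, not Malcolm's actual code) of the two usual ways out: give the
point-free binding its signature back, or eta-expand it so that it becomes
a function binding, which the M-R never restricts.

    import qualified Data.Set

    addToSet :: Ord a => a -> Data.Set.Set a -> Data.Set.Set a
    addToSet = Data.Set.insert

    -- No signature needed here: function bindings are unrestricted.
    addToSet' x s = Data.Set.insert x s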

Regards,
Malcolm


Re: The dreaded M-R

2006-01-31 Thread John Hughes



   "Simon Marlow" <[EMAIL PROTECTED]> writes:


Given the new evidence that it's actually rather hard to demonstrate any
performance loss in the absence of the M-R with GHC, I'm attracted to
the option of removing it in favour of a warning.
   



   As another data point, today for the first time I received an error
   (not a warning) from ghc about the M-R:

   Ambiguous type variable `a' in the constraint:
   `Ord a' arising from use of `Data.Set.insert' at Pretty.hs:28:11-20
   Possible cause: the monomorphism restriction applied to the following:
   addToSet :: a -> Data.Set.Set a -> Data.Set.Set a (bound at
   Pretty.hs:28:0)
   Probable fix: give these definition(s) an explicit type signature
   or use -fno-monomorphism-restriction

   So, without the M-R or a type signature, my code is OK. The proposal
   to accept this code but produce an optional warning is (I think)
   better than the current error.

   Regards,
   Malcolm


Well, is it OK? From the type-checker's point of view, yes. But have you
lost sharing? Have you introduced a space leak, because a seq on one of the
occurrences of your variable only forces one instance? Those are the
dangers of following the advice to put a type signature in.

John



Re: The dreaded M-R

2006-01-31 Thread Johannes Waldmann
Lennart Augustsson wrote:

> All that said, I think not being able to give a name to a context is
> a real weakness in Haskell.  It's one of the few things that cannot
> be named, and being able to do so would help refactoring and modularity.

definitely. Currently we have to simulate that by

class    ( C1 a, C2 a, ... ) => Context a
instance ( C1 a, C2 a, ... ) => Context a

(is it equivalent? I hope so)
but it is awkward to duplicate information like that.
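
A concrete instance of that simulation (an illustrative example; the names,
and the FlexibleInstances/UndecidableInstances extensions GHC needs for it,
are not from the original post):

    {-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

    class    (Eq a, Show a) => Describable a
    instance (Eq a, Show a) => Describable a

    -- One constraint now stands for the whole context:
    describe :: Describable a => a -> a -> String
    describe x y = show x ++ (if x == y then " == " else " /= ") ++ show y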

best regards,
-- 
-- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 --
 http://www.imn.htwk-leipzig.de/~waldmann/ ---



Re: The dreaded M-R

2006-01-31 Thread Malcolm Wallace
"Simon Marlow" <[EMAIL PROTECTED]> writes:

> Given the new evidence that it's actually rather hard to demonstrate any
> performance loss in the absence of the M-R with GHC, I'm attracted to
> the option of removing it in favour of a warning.

As another data point, today for the first time I received an error
(not a warning) from ghc about the M-R:

Ambiguous type variable `a' in the constraint:
  `Ord a' arising from use of `Data.Set.insert' at Pretty.hs:28:11-20
Possible cause: the monomorphism restriction applied to the following:
  addToSet :: a -> Data.Set.Set a -> Data.Set.Set a (bound at 
Pretty.hs:28:0)
Probable fix: give these definition(s) an explicit type signature
  or use -fno-monomorphism-restriction

So, without the M-R or a type signature, my code is OK.  The proposal
to accept this code but produce an optional warning is (I think)
better than the current error.

Regards,
Malcolm


Re: The dreaded M-R

2006-01-31 Thread Tomasz Zielonka
[I have subscribed to the list today, so my mailbox doesn't contain the
post I should respond to (the one that started this thread). BTW, is
there a way to tell mailman to send me all previous postings?]

Concurrent Clean uses =: for "Constant Graph Definitions", which seem to
be related.

See Concurrent Clean V2.0 Language Report (Draft), section 3.6, Defining
Constants. This is page 24 in:

ftp://ftp.cs.kun.nl/pub/Clean/Clean20/doc/CleanRep2.0.pdf

Best regards
Tomasz

-- 
I am searching for programmers who are good at least in
(Haskell || ML) && (Linux || FreeBSD || math)
for work in Warsaw, Poland


RE: The dreaded M-R

2006-01-31 Thread Simon Marlow
On 30 January 2006 21:49, Andrew Pimlott wrote:

> On Mon, Jan 30, 2006 at 04:45:56PM -, Simon Marlow wrote:
>> Given the new evidence that it's actually rather hard to demonstrate
>> any performance loss in the absence of the M-R with GHC, I'm
>> attracted to the option of removing it in favour of a warning.
> 
> I caution against the hope that warnings will contribute to the
> solution, whichever side you're on.  This is a general argument: 
> Either the warning is on by default or off.  If off, it does no harm,
> but doesn't help much either.  If on, it either triggers only on code
> that is almost certainly wrong (or easily disambiguated), or it
> sometimes triggers on perfectly good code.  In the former case, it
> would be better to make it illegal (or require the disambiguation). 
> In the latter, nobody likes disabling warnings, so they'll grumble
> and change the code instead.
> 
> In the present case, people aren't (only) opposing the M-R out of
> principle, but because they actually use overloaded variable
> definitions and (at least sometimes) want to leave off the signature.
>
> So I don't see how one could claim, as on the wiki, the warning
> "wouldn't happen much".  I suspect it would happen, and annoy people,
> and defeat the reason that people want to remove the M-R.

The assertion that it "wouldn't happen much" is based on the observation
earlier in this thread that it was actually difficult to write some code
that illustrated the problem.

Nevertheless, I didn't mean to imply that the language would mandate a
warning, I'll change the wiki to make this more clear.

Cheers,
Simon


RE: The dreaded M-R

2006-01-31 Thread Simon Marlow
On 31 January 2006 00:53, Neil Mitchell wrote:

>> Second, a warning about "loss of sharing" may befuddle beginners (who
>> are usually not taught to write type signatures at the start).
> 
> Are standards documents the place for prescribing which warnings
> should be raised, and under what circumstances?

Not prescribing, no.  It would be a recommendation.  The Haskell 98
report already contains recommendations - see the section on pragmas,
for example, and I'm sure there are more examples.

Cheers,
Simon


Re: The dreaded M-R

2006-01-30 Thread John Meacham
On Mon, Jan 30, 2006 at 10:08:04PM -0500, Lennart Augustsson wrote:
> All that said, I think not being able to give a name to a context is
> a real weakness in Haskell.  It's one of the few things that cannot
> be named, and being able to do so would help refactoring and modularity.

this is one of the things my 'class aliases' proposal was meant to
address (among other things). I hope to implement it for ghc, but have
been busy as of late so probably won't get to it soon.

John

-- 
John Meacham - ⑆repetae.net⑆john⑈ 


Re: The dreaded M-R

2006-01-30 Thread Lennart Augustsson

John Hughes wrote:

> Lennart Augustsson wrote:
>
> > On the subject of type signatures, I don't want to
> > make them mandatory, but I think they should be strongly
> > encouraged.  I don't buy the argument that they make
> > refactoring programs that much harder.  It's still
> > very easy to do, the type checker will tell you exactly
> > where.  :)
>
> It can still be in a LOT of places--far too many for comfort. I'm not
> making this up--I've experienced severe problems in practice caused by
> this very point. It depends what kind of code you're working with, of
> course. I'm not saying type signatures are ALWAYS a problem for
> refactoring, just that they sometimes are--and that it makes sense to
> leave it up to the programmer whether or not to include them.


So what if it's in a lot of places?  The compiler tells you where to
change.  Each change takes a few seconds.  Even with hundreds of changes
you'd probably be done in under half an hour.

Of course, you'd like a tool to do these kind of changes.

All that said, I think not being able to give a name to a context is
a real weakness in Haskell.  It's one of the few things that cannot
be named, and being able to do so would help refactoring and modularity.

-- Lennart


Re: The dreaded M-R

2006-01-30 Thread Lennart Augustsson

Andrew Pimlott wrote:

> On Tue, Jan 31, 2006 at 12:52:51AM +, Neil Mitchell wrote:
> > > Second, a warning about "loss of sharing" may befuddle beginners (who
> > > are usually not taught to write type signatures at the start).
> >
> > Are standards documents the place for prescribing which warnings
> > should be raised, and under what circumstances?
> >
> > If someone is using GHC, and has specified -O2 then clearly something
> > that causes vastly more time is a problem. If someone is learning
> > Haskell and is using Hugs then they probably couldn't care less.
> > Perhaps some warnings should be left up to the implementation to
> > decide...
>
> My ultimate point was that the possibility of a warning should carry
> very little weight (if any) when analyzing the pros and cons of a
> language change.  If you want to argue that a warning would mitigate a
> disadvantage of a change, you need to think about when the warning would
> be emitted, which I agree should be outside the scope of a standards
> discussion.  So I am just suggesting that we simplify the discussion by
> not talking about warnings (which suggestion I will follow as soon as I
> hit send!).


I agree that requiring a warning in the language standard is a rather
dodgy thing.  So let's say we don't have a warning.  Is this a tried
solution?  Yes, nhc does exactly that.  It does not have the M-R nor
a warning.  And I have never heard outcries about how bad nhc is because
of this.

-- Lennart


Re: The dreaded M-R

2006-01-30 Thread Benjamin Franksen
On Tuesday 31 January 2006 01:37, Andrew Pimlott wrote:
> On Tue, Jan 31, 2006 at 12:57:18AM +0100, [EMAIL PROTECTED] 
wrote:
> > Quoting Andrew Pimlott <[EMAIL PROTECTED]>:
> > >On Mon, Jan 30, 2006 at 11:06:29PM +0100, [EMAIL PROTECTED] 
wrote:
> > >>So I envisage that you'd turn off the warning in the same way as
> > >>you turn off the M-R today: by a type signature.
> > >
> > >But if people were happy adding type signatures for every
> > > polymorphic variable definition, they wouldn't be moving to
> > > eliminate the M-R, would they?  Or do I misunderstand?
> >
> > Well, my feeling is that the M-R is an ugly wart, and I want it
> > gone. But I'm still happy to put a type signature when I want
> > something to be polymorphic.
>
> Ok, I understand your position now.  But even given this view, I
> think the warning will be problematic.  First, when will the warning
> be emitted?  For all variable assignments without signatures, or only
> for those that the implementation fails to monomorphize (as an
> optimization)?  The first is heavy-handed; 

Agreed.

> the second, obviously 
> implementation-dependent (someone using another implementation, or
> another optimization level, will get the warning and will lose
> sharing). 

And what, if I may ask, is the problem with that? I mean, I am used to 
different implementations having, sometimes drastic, differences in 
speed or memory use. Getting a warning about possible loss of 
efficiency due to loss of sharing might be /very/ helpful; and it can 
safely be ignored by a beginner, as long as it is unmistakenly flagged 
as such (i.e. an efficiency warning, nothing /wrong/ with the code). 
Furthermore, you would turn on such a warning only if you are expressly 
interested in your program's efficiency. And in this case I would be 
thankful to get as many (substantial) warnings as possible.

> Second, a warning about "loss of sharing" may befuddle 
> beginners (who are usually not taught to write type signatures at the
> start).
>
> Well, maybe when someone implements this warning, we will find out
> I'm wrong and it doesn't cause trouble.   And I agree with removing
> the M-R, with or without the warning.

I would make it a strong recommendation for implementations to provide a 
warning about loss of sharing due to 'accidental' (=inferred) 
polymorphism/overloading, whenever it /actually happens/. But I would 
demand that it is not turned on by default, so as not to unsettle the 
unsuspecting beginner.

Ben


Re: The dreaded M-R

2006-01-30 Thread Josef Svenningsson
There have been many good arguments for and against the M-R in this thread.
But the most important argument against the M-R hasn't been put forward
yet. Well, I think it is the most important argument anyway. Haskell is,
and has always been, a non-strict language. *Not* a lazy language. Although
many people on this list know this, it is so easy to forget that it is
worth repeating. Haskell is *not* a lazy language.

There are many evaluation strategies one can use to implement Haskell. The
problem with the M-R is that it is a concern only in *some* of these
evaluation strategies, most notably lazy evaluation. That makes the M-R
such a strange thing to have in the specification of Haskell. If you read
the motivation section which defines the M-R (4.4.5 in the Haskell 98
report), the report suddenly starts to talk about how many times a certain
expression is evaluated. But nowhere in the report is it defined how
expressions should be evaluated. This makes the M-R truly butt-ugly!

Some might say: Well then, let's make Haskell lazy. All implementations are
lazy anyway! No, not all implementations are lazy. And even if that were
the case, I think it would be a big mistake to explicitly define Haskell's
evaluation order. There is a reason why Haskell was not specified as lazy
in the first place.

But that doesn't mean that having a standardized lazy semantics for Haskell
would be a bad thing. It would be a very nice thing to have an addendum to
Haskell' which defined a standard lazy semantics. If it turns out that
people want the M-R after all, it should be placed in such an addendum,
which actually talks about the evaluation order. However, I do realize that
such an addendum would be quite a bit of work to put together. Perhaps
after Haskell' is fixed we can start thinking about whether we have the
need and the stamina to put together such a thing.

Cheers,
/Josef



Re: The dreaded M-R

2006-01-30 Thread Andrew Pimlott
On Tue, Jan 31, 2006 at 12:52:51AM +, Neil Mitchell wrote:
> > Second, a warning about "loss of sharing" may befuddle beginners (who
> > are usually not taught to write type signatures at the start).
> 
> Are standards documents the place for prescribing which warnings
> should be raised, and under what circumstances?
> 
> If someone is using GHC, and has specified -O2 then clearly something
> that causes vastly more time is a problem. If someone is learning
> Haskell and is using Hugs then they probably couldn't care less.
> Perhaps some warnings should be left up to the implementation to
> decide...

My ultimate point was that the possibility of a warning should carry
very little weight (if any) when analyzing the pros and cons of a
language change.  If you want to argue that a warning would mitigate a
disadvantage of a change, you need to think about when the warning would
be emitted, which I agree should be outside the scope of a standards
discussion.  So I am just suggesting that we simplify the discussion by
not talking about warnings (which suggestion I will follow as soon as I
hit send!).

Andrew


Re: The dreaded M-R

2006-01-30 Thread Philippa Cowderoy
On Tue, 31 Jan 2006, Neil Mitchell wrote:

> Are standards documents the place for prescribing which warnings
> should be raised, and under what circumstances?
> 
> If someone is using GHC, and has specified -O2 then clearly something
> that causes vastly more time is a problem. If someone is learning
> Haskell and is using Hugs then they probably couldn't care less.
> Perhaps some warnings should be left up to the implementation to
> decide...
> 

That's why my preferred phrasing is "warnings available to the user" - 
that is, they don't have to be on by default but there should be an option 
to turn them on.

-- 
[EMAIL PROTECTED]

'In Ankh-Morpork even the shit have a street to itself...
 Truly this is a land of opportunity.' - Detritus, Men at Arms


Re: The dreaded M-R

2006-01-30 Thread Philippa Cowderoy
On Mon, 30 Jan 2006, Andrew Pimlott wrote:

> Ok, I understand your position now.  But even given this view, I think
> the warning will be problematic.  First, when will the warning be
> emitted?  For all variable assignments without signatures, or only for
> those that the implementation fails to monomorphize (as an
> optimization)?

How about for those a minimal standards-compliant implementation would 
fail to retain sharing in, coupled with some requirements about sharing 
equivalent to a specified set of transforms on a dictionary-passing 
implementation?

-- 
[EMAIL PROTECTED]

'In Ankh-Morpork even the shit have a street to itself...
 Truly this is a land of opportunity.' - Detritus, Men at Arms


Re: The dreaded M-R

2006-01-30 Thread Neil Mitchell
> Second, a warning about "loss of sharing" may befuddle beginners (who
> are usually not taught to write type signatures at the start).

Are standards documents the place for prescribing which warnings
should be raised, and under what circumstances?

If someone is using GHC, and has specified -O2 then clearly something
that causes vastly more time is a problem. If someone is learning
Haskell and is using Hugs then they probably couldn't care less.
Perhaps some warnings should be left up to the implementation to
decide...

Thanks

Neil


Re: The dreaded M-R

2006-01-30 Thread Andrew Pimlott
On Tue, Jan 31, 2006 at 12:57:18AM +0100, [EMAIL PROTECTED] wrote:
> Quoting Andrew Pimlott <[EMAIL PROTECTED]>:
> 
> >On Mon, Jan 30, 2006 at 11:06:29PM +0100, [EMAIL PROTECTED] wrote:
> >>So I envisage that you'd turn off the warning in the same way as
> >>you turn off the M-R today: by a type signature.
> >
> >But if people were happy adding type signatures for every polymorphic
> >variable definition, they wouldn't be moving to eliminate the M-R, would
> >they?  Or do I misunderstand?
> 
> Well, my feeling is that the M-R is an ugly wart, and I want it gone.
> But I'm still happy to put a type signature when I want something to
> be polymorphic.

Ok, I understand your position now.  But even given this view, I think
the warning will be problematic.  First, when will the warning be
emitted?  For all variable assignments without signatures, or only for
those that the implementation fails to monomorphize (as an
optimization)?  The first is heavy-handed; the second, obviously
implementation-dependent (someone using another implementation, or
another optimization level, will get the warning and will lose sharing).
Second, a warning about "loss of sharing" may befuddle beginners (who
are usually not taught to write type signatures at the start).

Well, maybe when someone implements this warning, we will find out I'm
wrong and it doesn't cause trouble.   And I agree with removing the M-R,
with or without the warning.

Andrew


Re: The dreaded M-R

2006-01-30 Thread lennart

Quoting Andrew Pimlott <[EMAIL PROTECTED]>:


> On Mon, Jan 30, 2006 at 11:06:29PM +0100, [EMAIL PROTECTED] wrote:
> > Quoting Andrew Pimlott <[EMAIL PROTECTED]>:
> > > In the present case, people aren't (only) opposing the M-R out of
> > > principle, but because they actually use overloaded variable definitions
> > > and (at least sometimes) want to leave off the signature.  So I don't
> > > see how one could claim, as on the wiki, the warning "wouldn't happen
> > > much".  I suspect it would happen, and annoy people, and defeat the
> > > reason that people want to remove the M-R.
> >
> > So I envisage that you'd turn off the warning in the same way as
> > you turn off the M-R today: by a type signature.
>
> But if people were happy adding type signatures for every polymorphic
> variable definition, they wouldn't be moving to eliminate the M-R, would
> they?  Or do I misunderstand?

Well, my feeling is that the M-R is an ugly wart, and I want it gone.
But I'm still happy to put a type signature when I want something to
be polymorphic.

   -- Lennart



Re: The dreaded M-R

2006-01-30 Thread Andrew Pimlott
On Mon, Jan 30, 2006 at 11:06:29PM +0100, [EMAIL PROTECTED] wrote:
> Quoting Andrew Pimlott <[EMAIL PROTECTED]>:
> >In the present case, people aren't (only) opposing the M-R out of
> >principle, but because they actually use overloaded variable definitions
> >and (at least sometimes) want to leave off the signature.  So I don't
> >see how one could claim, as on the wiki, the warning "wouldn't happen
> >much".  I suspect it would happen, and annoy people, and defeat the
> >reason that people want to remove the M-R.
> 
> So I envisage that you'd turn off the warning in the same way as
> you turn off the M-R today: by a type signature.

But if people were happy adding type signatures for every polymorphic
variable definition, they wouldn't be moving to eliminate the M-R, would
they?  Or do I misunderstand?

Andrew


Re: The dreaded M-R

2006-01-30 Thread Taral
On 1/30/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> So I envisage that you'd turn off the warning in the same way as
> you turn off the M-R today: by a type signature.  If you write the
> type you pretty much show that you know that you're doing something
> special.

This requires scoped type variables.
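
That is, a local binding's signature sometimes has to mention a type
variable of the enclosing signature, which plain Haskell 98 cannot express.
A sketch (an illustrative example) using the GHC extension:

    {-# LANGUAGE ScopedTypeVariables #-}

    firstTwo :: forall a. [a] -> ([a], [a])
    firstTwo xs = (front, front)
      where
        front :: [a]   -- this 'a' is the one bound by the forall above
        front = take 2 xs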

--
Taral <[EMAIL PROTECTED]>
"Computer science is no more about computers than astronomy is about
telescopes."
-- Edsger Dijkstra


Re: The dreaded M-R

2006-01-30 Thread lennart

Quoting Andrew Pimlott <[EMAIL PROTECTED]>:


> On Mon, Jan 30, 2006 at 04:45:56PM -, Simon Marlow wrote:
> > Given the new evidence that it's actually rather hard to demonstrate any
> > performance loss in the absence of the M-R with GHC, I'm attracted to
> > the option of removing it in favour of a warning.
>
> I caution against the hope that warnings will contribute to the
> solution, whichever side you're on.  This is a general argument:  Either
> the warning is on by default or off.  If off, it does no harm, but
> doesn't help much either.  If on, it either triggers only on code that
> is almost certainly wrong (or easily disambiguated), or it sometimes
> triggers on perfectly good code.  In the former case, it would be better
> to make it illegal (or require the disambiguation).  In the latter,
> nobody likes disabling warnings, so they'll grumble and change the code
> instead.
>
> In the present case, people aren't (only) opposing the M-R out of
> principle, but because they actually use overloaded variable definitions
> and (at least sometimes) want to leave off the signature.  So I don't
> see how one could claim, as on the wiki, the warning "wouldn't happen
> much".  I suspect it would happen, and annoy people, and defeat the
> reason that people want to remove the M-R.



So I envisage that you'd turn off the warning in the same way as
you turn off the M-R today: by a type signature.  If you write the
type you pretty much show that you know that you're doing something
special.

   -- Lennart




Re: The dreaded M-R

2006-01-30 Thread Andrew Pimlott
On Mon, Jan 30, 2006 at 04:45:56PM -, Simon Marlow wrote:
> Given the new evidence that it's actually rather hard to demonstrate any
> performance loss in the absence of the M-R with GHC, I'm attracted to
> the option of removing it in favour of a warning.

I caution against the hope that warnings will contribute to the
solution, whichever side you're on.  This is a general argument:  Either
the warning is on by default or off.  If off, it does no harm, but
doesn't help much either.  If on, it either triggers only on code that
is almost certainly wrong (or easily disambiguated), or it sometimes
triggers on perfectly good code.  In the former case, it would be better
to make it illegal (or require the disambiguation).  In the latter,
nobody likes disabling warnings, so they'll grumble and change the code
instead.

In the present case, people aren't (only) opposing the M-R out of
principle, but because they actually use overloaded variable definitions
and (at least sometimes) want to leave off the signature.  So I don't
see how one could claim, as on the wiki, the warning "wouldn't happen
much".  I suspect it would happen, and annoy people, and defeat the
reason that people want to remove the M-R.

Andrew


Re: The dreaded M-R

2006-01-30 Thread Sebastian Sylvan
On 1/30/06, Simon Marlow <[EMAIL PROTECTED]> wrote:
> I've put a wiki page with a summary of this discussion here:
>
> http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/Monomorph
> ismRestriction
>
> Hopefully I've captured most of the important points, please let me know
> if there's anything I missed, or misrepresented.  I'll add a ticket
> shortly.
>
> Given the new evidence that it's actually rather hard to demonstrate any
> performance loss in the absence of the M-R with GHC, I'm attracted to
> the option of removing it in favour of a warning.

Given that the discussion has focused a lot on how beginners would
feel about this, I'll chime in with my two cents. I may not be a
beginner in the strictest sense of the word, but I'm probably a lot
closer to it than the rest of the participants in this discussion :-)

I'm against it. People will want to *understand* the difference
between ":=" and "=", and I don't think beginners will really grok
something like that without significant difficulties. And it does add
significant extra clutter to the language, IMO. That symbol feels
"heavy", somehow, even if its meaning is subtle (at least from a
beginner's POV).

Also, since it's only a problem very rarely, it could be noted in an
"optimization faq" somewhere. Plus, don't we already tell people to
add type signatures when something is too slow? Isn't that the first
thing you would try when something is surprisingly slow?

/S

--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862


RE: The dreaded M-R

2006-01-30 Thread Simon Marlow
I've put a wiki page with a summary of this discussion here:

http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/Monomorph
ismRestriction

Hopefully I've captured most of the important points, please let me know
if there's anything I missed, or misrepresented.  I'll add a ticket
shortly.

Given the new evidence that it's actually rather hard to demonstrate any
performance loss in the absence of the M-R with GHC, I'm attracted to
the option of removing it in favour of a warning.

Cheers,
Simon


Re: The dreaded M-R

2006-01-30 Thread lennart

Quoting Malcolm Wallace <[EMAIL PROTECTED]>:


[EMAIL PROTECTED] writes:


Nhc didn't use to implement the M-R (maybe it does now).  When
porting code to nhc this caused a few code changes.  Perhaps
10 lines out of 1 when I tried the Bluespec compiler.  So
my gut feeling is that the M-R is a rare beast in practise.


I can confirm that nhc98 (and hence yhc) still does not implement
the M-R.  I haven't noticed any particular performance problems in
code compiled by nhc98 that were down to a consequent loss of sharing.
(But I can't claim to have explicitly looked.)  But as Lennart says,
you do get occasional type errors which mean you need to restructure
your code.  Almost always, these are local bindings, e.g.:

   Context required in LHS pattern

is an error emitted for code like:

   do (x,y) <- something complicated and numeric
  ...

and the fix is

   do xy <- something complicated and numeric
  let x = fst xy
  y = snd xy
  ...

As I recall, there were about 4-5 of these I needed to fix in the
entire nofib suite.


The changes I had to make were almost all simpler than that.
I just had to add a few type signatures.

(And thanks for supporting evidence from nhc. :)

   -- Lennart





Re: The dreaded M-R

2006-01-30 Thread Philippa Cowderoy
On Mon, 30 Jan 2006, John Hughes wrote:

> > Insist the warnings be available, don't require them to be in the standard
> > warning level?
> > 
> >  
> But then beginners will, without warning, sometimes find their code running
> unreasonably
> slowly. That isn't, really, any better. Too many will just conclude "Haskell
> is unreasonably
> slow" and abandon it.
> 

I can't speak for anyone else, but I certainly had a poke around the bit 
in the GHC docs on making stuff go faster (being not entirely clueless I 
found the bit about float not being specialised to be patronising - I 
happened to have audio in mind where float's appropriate and saves cache).

This particular student was also happy to stop thinking about speed so 
much for once - and with a gamedev community background I'm more 
interested in it than most, who were happy with Java at a point when it 
was still much slower than C++.

-- 
[EMAIL PROTECTED]

Ivanova is always right. 
I will listen to Ivanova. 
I will not ignore Ivanova's recomendations. 
Ivanova is God. 
And, if this ever happens again, Ivanova will personally rip your lungs out!


Re: The dreaded M-R

2006-01-30 Thread Malcolm Wallace
[EMAIL PROTECTED] writes:

> Nhc didn't use to implement the M-R (maybe it does now).  When
> porting code to nhc this caused a few code changes.  Perhaps
> 10 lines out of 1 when I tried the Bluespec compiler.  So
> my gut feeling is that the M-R is a rare beast in practise.

I can confirm that nhc98 (and hence yhc) still does not implement
the M-R.  I haven't noticed any particular performance problems in
code compiled by nhc98 that were down to a consequent loss of sharing.
(But I can't claim to have explicitly looked.)  But as Lennart says,
you do get occasional type errors which mean you need to restructure
your code.  Almost always, these are local bindings, e.g.:

Context required in LHS pattern

is an error emitted for code like:

do (x,y) <- something complicated and numeric
   ...

and the fix is

do xy <- something complicated and numeric
   let x = fst xy
       y = snd xy
   ...

As I recall, there were about 4-5 of these I needed to fix in the
entire nofib suite.

Regards,
Malcolm
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread John Hughes

From: Marcin 'Qrczak' Kowalczyk <[EMAIL PROTECTED]>

   John Hughes <[EMAIL PROTECTED]> writes:


By the way, you leave out a lot of type signatures too. Your Djinn
module contains 29 declarations with type signatures--and 17
without. The 17 are local, but local declarations are precisely
those where, without the M-R, a lack of sharing would be most likely
to bite.
   



Local definitions are rarely used polymorphically.

Perhaps without a type signature the definition should be polymorphic
and recomputed if it's global, and monomorphic and shared if it's local.

 


Probably this is normally what you want--but oh, how confusing it would be!
I'd like to be able to give a simple answer to the question "Is this binding
shared or not?", not an answer that begins with "Is there a type signature?",
"What does the type signature look like?", and "Is it local or global?"!

It would also invite very confusing behaviour when refactoring code to move
bindings between the top level and a local scope.

John


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread lennart

John Hughes <[EMAIL PROTECTED]> writes:

By the way, you leave out a lot of type signatures too. Your Djinn
module contains 29 declarations with type signatures--and 17
without. The 17 are local, but local declarations are precisely
those where, without the M-R, a lack of sharing would be most likely
to bite.


I've never been advocating type signatures on local bindings, only
top level.  For local bindings I sometimes do it, sometimes not.

Since the scope of a local bind is, umm, local, the compiler has
a much better chance of figuring out exactly what type it should
have.  I'm counting on the compiler to do its job here.  And
it seems to work. :)  But I should see what happens with my code
if I turn off the M-R.

   -- Lennart

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread lennart

Quoting John Hughes <[EMAIL PROTECTED]>:


Lennart Augustsson wrote:


John Hughes wrote:


An example where loss of sharing leads to exponential time:

fib n = fst (fib' n)

fib' :: (Num a, Num b) => Integer -> (a,b)
fib' 0 = (0,1)
fib' n = (snd p,fst p+snd p)
 where
   p :: (Num a, Num b) => (a,b)
   p = fib' (n-1)

It would be undesirable for adding a type signature to a function (such as
fib' above) to risk making its performance exponentially worse, because of
consequential loss of sharing in a binding somewhere else. Wouldn't it?



I don't agree in this case.  You are asking for something rather strange
when you put that type signature on fib'.  I also noticed that to give
a "convincing" example you picked something with polyporphic recursion.
P-R wasn't in Haskell at the time the M-R was added (if memory serves
me).  So there must be other examples.


True. I don't remember my original example.



I still think removing the M-R and issuing a warning when there's a
potential loss of sharing would be acceptable.



Wouldn't there be quite a lot of warnings?

I really think we have to remember beginners' problems here. I know no one
reading this discussion is one, but without beginners, there will be no
growth in the Haskell community. If we did as you suggest, then beginners
would be confronted by yet another kind of incomprehensible message--and I
fear it would be hard to report that x=e risks evaluating e many times, in a
way that would be viewed as positive by someone still trying to understand
how Haskell works at all.

John



I am thinking of the beginner.  I would hate to explain what the
difference between '=' and ':=' is and when to use one or the other.
I think that's a wart as ugly as the M-R.

As for there being many warnings, we don't really know, do we?
I'm only suggesting a warning when there's a potential loss of
sharing, not in general.  In my experience it's very rare to
have the M-R kick in at the top level; it's almost always in
local definitions.  And for local definitions you can see all uses
of a binding and handle it accordingly.  Local bindings are very
rarely used polymorphically.

Nhc didn't use to implement the M-R (maybe it does now).  When
porting code to nhc this caused a few code changes.  Perhaps
10 lines out of 1 when I tried the Bluespec compiler.  So
my gut feeling is that the M-R is a rare beast in practice.

   -- Lennart


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread John Hughes

Philippa Cowderoy wrote:


On Mon, 30 Jan 2006, John Hughes wrote:

 


Wouldn't there be quite a lot of warnings?

I really think we have to remember beginners' problems here. I know no one
reading this discussion is
one, but without beginners, there will be no growth in the Haskell community.
If we did as you suggest,
then beginners would be confronted by yet another kind of incomprehensible
message--and I fear it
would be hard to report that x=e risks evaluating e many times, in a way that
would be viewed as positive
by someone still trying to understand how Haskell works at all.

   



Insist the warnings be available, don't require them to be in the standard 
warning level?


 

But then beginners will, without warning, sometimes find their code running
unreasonably slowly. That isn't, really, any better. Too many will just
conclude "Haskell is unreasonably slow" and abandon it.

John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread Philippa Cowderoy
On Mon, 30 Jan 2006, John Hughes wrote:

> Wouldn't there be quite a lot of warnings?
> 
> I really think we have to remember beginners' problems here. I know noone
> reading this discussion is
> one, but without beginners, there will be no growth in the Haskell community.
> If we did as you suggest,
> then beginners would be confronted by yet another kind of incomprehensible
> message--and I fear it
> would be hard to report that x=e risks evaluating e many times, in a way that
> would be viewed as positive
> by someone still trying to understand how Haskell works at all.
> 

Insist the warnings be available, don't require them to be in the standard 
warning level?

-- 
[EMAIL PROTECTED]

The task of the academic is not to scale great 
intellectual mountains, but to flatten them.
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread Marcin 'Qrczak' Kowalczyk
John Hughes <[EMAIL PROTECTED]> writes:

> By the way, you leave out a lot of type signatures too. Your Djinn
> module contains 29 declarations with type signatures--and 17
> without. The 17 are local, but local declarations are precisely
> those where, without the M-R, a lack of sharing would be most likely
> to bite.

Local definitions are rarely used polymorphically.

Perhaps without a type signature the definition should be polymorphic
and recomputed if it's global, and monomorphic and shared if it's local.

This is how it works in Clean: =: defines a shared constant, => defines
a nullary function, and = is equivalent to =: locally and to => globally,
irrespective of a type signature.

-- 
   __("< Marcin Kowalczyk
   \__/   [EMAIL PROTECTED]
^^ http://qrnik.knm.org.pl/~qrczak/
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread John Hughes

Lennart Augustsson wrote:


John Hughes wrote:


An example where loss of sharing leads to exponential time:

fib n = fst (fib' n)

fib' :: (Num a, Num b) => Integer -> (a,b)
fib' 0 = (0,1)
fib' n = (snd p,fst p+snd p)
 where
   p :: (Num a, Num b) => (a,b)
   p = fib' (n-1)

It would be undesirable for adding a type signature to a function (such as
fib' above) to risk making its performance exponentially worse, because of
consequential loss of sharing in a binding somewhere else. Wouldn't it?



I don't agree in this case.  You are asking for something rather strange
when you put that type signature on fib'.  I also noticed that to give
a "convincing" example you picked something with polyporphic recursion.
P-R wasn't in Haskell at the time the M-R was added (if memory serves
me).  So there must be other examples.


True. I don't remember my original example.



I still think removing the M-R and issuing a warning when there's a
potential loss of sharing would be acceptable.



Wouldn't there be quite a lot of warnings?

I really think we have to remember beginners' problems here. I know no one
reading this discussion is one, but without beginners, there will be no
growth in the Haskell community. If we did as you suggest, then beginners
would be confronted by yet another kind of incomprehensible message--and I
fear it would be hard to report that x=e risks evaluating e many times, in a
way that would be viewed as positive by someone still trying to understand
how Haskell works at all.

John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread Lennart Augustsson

John Hughes wrote:

An example where loss of sharing leads to exponential time:

fib n = fst (fib' n)

fib' :: (Num a, Num b) => Integer -> (a,b)
fib' 0 = (0,1)
fib' n = (snd p,fst p+snd p)
 where
   p :: (Num a, Num b) => (a,b)
   p = fib' (n-1)

It would be undesirable for adding a type signature to a function (such as
fib' above) to risk making its performance exponentially worse, because of
consequential loss of sharing in a binding somewhere else. Wouldn't it?


I don't agree in this case.  You are asking for something rather strange
when you put that type signature on fib'.  I also noticed that to give
a "convincing" example you picked something with polyporphic recursion.
P-R wasn't in Haskell at the time the M-R was added (if memory serves
me).  So there must be other examples.

I still think removing the M-R and issuing a warning when there's a
potential loss of sharing would be acceptable.

-- Lennart
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread John Hughes

Lennart Augustsson wrote:


On the subject of type signatures, I don't want to
make them mandatory, but I think they should be strongly
encouraged.  I don't buy the argument that they make
refactoring programs that much harder.  It's still
very easy to do, the type checker will tell you exactly
where.  :) 


It can still be in a LOT of places--far too many for comfort. I'm not making
this up--I've experienced severe problems in practice caused by this very
point. It depends what kind of code you're working with, of course. I'm not
saying type signatures are ALWAYS a problem for refactoring, just that they
sometimes are--and that it makes sense to leave it up to the programmer
whether or not to include them.

By the way, you leave out a lot of type signatures too. Your Djinn module
contains 29 declarations with type signatures--and 17 without. The 17 are
local, but local declarations are precisely those where, without the M-R,
a lack of sharing would be most likely to bite.

John
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread John Hughes

   it would also be possible to require the compiler 'memoize' all top
   level bindings on their type parameter. then the problem goes away, but
   at the cost of hiding some machinery under the hood. however the type
   analysis pass mentioned above would often be able to discard this
   'under the hood' memoization.

We discussed this idea when Haskell was first designed. The trouble is,
there's an awful interaction with space leaks. 


Imagine you have a binding

let x = ...e1... in ...e2...

where e1 contains pointers to some large structure, and then somewhere in e2
you need to drop those pointers. You write x `seq` ... at the place where you
want to drop the pointers, right? (Assuming the pointers are no longer
referred to from x's value). Wrong! If x is overloaded, then you've only
forced the evaluation of *one instance* of x, and e1, together with the
structures it points to, have to be retained in case you ever evaluate a
*different* instance of x in the future! Memoising x on its type makes no
difference to this: you have to retain e1 until every possible instance of
x has been evaluated. If there's a fixed finite number of possibilities
then this might be possible, if awkward ((x::Integer)`seq` (x::Float)`seq`...),
but if there are infinitely many possible instances then you're screwed.

So if it is to be *possible* to fix space leaks, then memoisation is not
a good alternative to shared bindings.
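
(To make the shape of the problem concrete, here is a small, hand-written
illustration. The names and the list argument are invented, and the per-type
memoisation it refers to is the hypothetical scheme under discussion, not
something any current compiler does; under today's dictionary passing, x is
simply recomputed at each use instead.)

-- e2 would like the big structure to be dead after the seq, but under
-- per-type memoisation x's other potential instances would still hold it.
leakShape :: [Integer] -> Integer
leakShape bigStructure =
  let x :: Num a => a
      x = fromInteger (sum bigStructure)   -- e1: refers to bigStructure
  in (x :: Integer) `seq`                  -- forces only the Integer instance
     0                                     -- e2: wants bigStructure gone by now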

John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-30 Thread John Hughes

An example where loss of sharing leads to exponential time:

fib n = fst (fib' n)

fib' :: (Num a, Num b) => Integer -> (a,b)
fib' 0 = (0,1)
fib' n = (snd p,fst p+snd p)
 where
   p :: (Num a, Num b) => (a,b)
   p = fib' (n-1)

Here you need BOTH type signatures, since otherwise the one stated for p
is too general (needs polymorphic recursion).

Maybe it's obvious from the two type variables that sharing will be lost--
but the example below only has one, and is still exponential time under Hugs
(GHC optimises it to linear).

fib2 n = fst (fib2' n)

fib2' :: Num a => Integer -> (a,a)
fib2' 0 = (0,1)
fib2' n = (snd p,fst p+snd p)
 where
   p :: Num a => (a,a)
   p = fib2' (n-1)

In this case, provided you write the type signature on fib2' (thus permitting
polymorphic recursion), then the type signature I wrote on p is the one that
would be inferred, were it not for the M-R--and thus this program risks
running in exponential time. Same applies to fib' above: if you write the
type signature on fib', then but for the M-R, the type signature on p is the
one that would be inferred, and you get exponential time behaviour.

It would be undesirable for adding a type signature to a function (such as
fib' above) to risk making its performance exponentially worse, because of
consequential loss of sharing in a binding somewhere else. Wouldn't it?
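
(For contrast, a sketch of the sharing-preserving shape that the M-R, or an
explicit monomorphic signature on p, pushes you towards. This is a hand-written
illustration with invented names, not from the original message:)

-- Bind the pair at a single monomorphic type, so p is a plain shared thunk
-- and the whole computation takes only linearly many additions.
fibShared :: Integer -> Integer
fibShared n = fst (go n)
  where
    go :: Integer -> (Integer, Integer)
    go 0 = (0, 1)
    go k = (snd p, fst p + snd p)
      where
        p = go (k - 1)   -- monomorphic binding: computed once, used twice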

John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-29 Thread Cale Gibbard
On 29/01/06, Lennart Augustsson <[EMAIL PROTECTED]> wrote:
> Hmmm, maybe a warning is the best solution in general.
> Even without trying any link time resolution.
>
> Given how hard Cale had to work to reproduce it, I think
> it's a rare problem.  Maybe someone who knows the innards
> of ghc could make a quick hack that turns on the M-R and
> warns when there's actually sharing being lost.
>

Yeah, this is a good idea, I wonder how often this actually comes up.
I'd had trouble reproducing it since I accidentally gave 'b' a type
signature which wasn't sufficiently polymorphic to make the problem
occur.

Inside a module, optimisation to determine if sharing was needed is
surely possible. Having the compiler produce specialised code to
handle cases which recover sharing from inside a single module seems
practical enough to do.

Across module boundaries, if whole-program optimisation can't be
applied to restore sharing somehow, you'd need extra runtime support -
and you're right, that does seem like it may be a little heavy. How I
see it, it would basically amount to a polymorphic value becoming a
lookup in a memo map from instances at which it's been computed to
(weak pointers to?) final values, together with a fall-through to the
code which uses the dictionary to compute the value, and then inserts
a new entry into the map before returning normally.
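
(For what it's worth, the lookup Cale describes can be sketched in plain
Haskell with Typeable and Dynamic. Everything below is invented for
illustration; a real version would live in the runtime system and would use
weak pointers rather than a global IORef:)

import Data.Dynamic (Dynamic, fromDynamic, toDyn)
import Data.IORef (IORef, modifyIORef, newIORef, readIORef)
import qualified Data.Map as Map
import Data.Typeable (TypeRep, Typeable, typeOf)
import System.IO.Unsafe (unsafePerformIO)

-- One table per polymorphic binding: the type it has been used at, mapped
-- to the value already computed at that type.
type MemoTable = IORef (Map.Map TypeRep Dynamic)

-- Look the value up at the type it is being instantiated at; on a miss,
-- fall through to the dictionary computation and record the result.
memoAt :: Typeable a => MemoTable -> a -> a
memoAt table compute = unsafePerformIO $ do
  m <- readIORef table
  let key = typeOf compute
  case Map.lookup key m >>= fromDynamic of
    Just shared -> return shared
    Nothing     -> do modifyIORef table (Map.insert key (toDyn compute))
                      return compute

xTable :: MemoTable
xTable = unsafePerformIO (newIORef Map.empty)
{-# NOINLINE xTable #-}

-- The troublesome overloaded binding, memoised by hand on its use type.
x :: (Typeable a, Num a) => a
x = memoAt xTable (fromInteger 1)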

Whether all that's really worth it, we should probably look into. How
many ordinary programs do we have sitting around that run into this
sharing problem?

Anyway, I see mandating the MR as somewhat ugly, while including it as
a possible compiler optimisation seems okay. Since a smart enough
compiler/runtime could make sure that the problems the MR solves are
nonexistent anyway, do we really want to include it in the standard?

 - Cale
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-29 Thread Lennart Augustsson

Hmmm, maybe a warning is the best solution in general.
Even without trying any link time resolution.

Given how hard Cale had to work to reproduce it, I think
it's a rare problem.  Maybe someone who knows the innards
of ghc could make a quick hack that turns on the M-R and
warns when there's actually sharing being lost.

Philippa Cowderoy wrote:

On Sun, 29 Jan 2006, Lennart Augustsson wrote:



Jacques Carette wrote:


Personally I think that this ought to be resolved by static means -- and
yes, by the linker, as it can't be done properly earlier.


But there are cases that cannot be resolved statically.
On the other hand, they are probably rare enough to ignore.




Or to flag up a compiler and/or linker warning for those who request them?



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-29 Thread Jacques Carette

Lennart Augustsson wrote:

Jacques Carette wrote:
Personally I think that this ought to be resolved by static means -- 
and yes, by the linker, as it can't be done properly earlier.


But there are cases that cannot be resolved statically.
On the other hand, they are probably rare enough to ignore.
There will always be some things that are impossible to do statically - 
but we should still strive to do as many as possible, even if we are 
forced to be incomplete about it.  As long as the cases that can't be 
done are relatively easy to define and occur rarely, that seems like a 
good compromise to me.


Jacques
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-29 Thread John Meacham
There are a whole lot of ways to ruin the performance of haskell
programs that are a lot more subtle than the MR restriction. I don't see
why we deem this particular one, especially when it is easily solved by
adding a type signature, of special enough import to change the language
over.


However, if we must have a new syntax then another possibility is that, since
pattern matching is already monomorphic, we can have the 'unary tuple' pattern
mean monomorphic without using any new syntax.

(x) = fromInteger 3

which sort of looks like a unary tuple match and makes it very clear
that only values (and not functions) may be bound in this manner and
the fact that it is shared falls naturally from thinking of it as being
pulled out of a data structure.

I would much prefer something like this as opposed to a new operator. we
are used to = behaving differently when pattern matching vs naming.

John
-- 
John Meacham - ⑆repetae.net⑆john⑈ 
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-29 Thread Philippa Cowderoy
On Sun, 29 Jan 2006, Lennart Augustsson wrote:

> Jacques Carette wrote:
> > Personally I think that this ought to be resolved by static means -- and
> > yes, by the linker, as it can't be done properly earlier.
> 
> But there are cases that cannot be resolved statically.
> On the other hand, they are probably rare enough to ignore.
> 

Or to flag up a compiler and/or linker warning for those who request them?

-- 
[EMAIL PROTECTED]

Sometimes you gotta fight fire with fire. Most 
of the time you just get burnt worse though.
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-29 Thread Lennart Augustsson

Jacques Carette wrote:
Personally I think that this ought to be resolved by static means -- and 
yes, by the linker, as it can't be done properly earlier.


But there are cases that cannot be resolved statically.
On the other hand, they are probably rare enough to ignore.

-- Lennart


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-29 Thread Jacques Carette

Cale Gibbard wrote:

Hmm... however, could we not assign to each instance a unique
identifier which could be compared? Say a cryptographic hash of the
source code for the instance? (Which of course would never be exposed
to the user.) That should be enough to tell them apart.
By the way, this is the method used by Computer Algebra Systems.  
However, it is really unclear if the cost is worth it, in memory as well 
as in CPU time.  When you are doing a lot of "expression manipulation", 
this seems worth it, but for more general program execution, it seems 
not to be.


Personally I think that this ought to be resolved by static means -- and 
yes, by the linker, as it can't be done properly earlier.


Jacques
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-29 Thread Lennart Augustsson

Yes, you can do something to regain sharing.  That's what I meant
by having some clever runtime machinery.  But it's rather complex
for what the problem at hand is.

-- Lennart

Cale Gibbard wrote:

Aha, okay. Yeah, I can reproduce that now, and it makes good sense
what's going on. It is in fact quite sensible that x be evaluated
twice with that sort of polymorphism.

Hmm... however, could we not assign to each instance a unique
identifier which could be compared? Say a cryptographic hash of the
source code for the instance? (Which of course would never be exposed
to the user.) That should be enough to tell them apart. The
translation would then partition the work according to the incoming
instances, and share all computation possible.

 - Cale

On 28/01/06, Lennart Augustsson <[EMAIL PROTECTED]> wrote:


Oh, I guess I did one more change.  I put b in a separate module.

Your type signature isn't the most general, the most general is
b :: (Num a, Num b) => (a, b)
And that is the source of the problem.  You need to pass two
dictionaries.  To keep sharing you'd need some very clever
runtime machinery to find that the dictionaries are the same.

   -- Lennart

Cale Gibbard wrote:


On 28/01/06, Lennart Augustsson <[EMAIL PROTECTED]> wrote:



Remove the type signature for b and you will see the
loss of sharing.



Nope, still not seeing it with either profiling or Debug.Trace. Also
-- the type signature I gave was polymorphic, so what's the deal? If
adding a polymorphic type signature fixes the problem, and a
polymorphic type signature can be inferred, why not simply treat the
source as if one had been written there?




It mostly hurts people like John Hughes that don't
have the energy to put in type signatures. ;)



Well, sure. I don't think that we should see exponential blowup in
complexity of some programs by leaving out type signatures (though if
it was only in sufficiently rare cases, I could put up with it).




On the subject of type signatures, I don't want to
make them mandatory, but I think they should be strongly
encouraged.  I don't buy the argument that they make
refactoring programs that much harder.  It's still
very easy to do, the type checker will tell you exactly
where. :)



Me too. It's nice to be able to write quick programs where you leave
out the type signatures, but including them is always good for real
code. I also think that type signatures (and the type system in
general), makes it much easier to refactor code and work on code with
which you're unfamiliar.

- Cale








___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread Cale Gibbard
Aha, okay. Yeah, I can reproduce that now, and it makes good sense
what's going on. It is in fact quite sensible that x be evaluated
twice with that sort of polymorphism.

Hmm... however, could we not assign to each instance a unique
identifier which could be compared? Say a cryptographic hash of the
source code for the instance? (Which of course would never be exposed
to the user.) That should be enough to tell them apart. The
translation would then partition the work according to the incoming
instances, and share all computation possible.

 - Cale

On 28/01/06, Lennart Augustsson <[EMAIL PROTECTED]> wrote:
> Oh, I guess I did one more change.  I put b in a separate module.
>
> Your type signature isn't the most general, the most general is
> b :: (Num a, Num b) => (a, b)
> And that is the source of the problem.  You need to pass two
> dictionaries.  To keep sharing you'd need some very clever
> runtime machinery to find that the dictionaries are the same.
>
> -- Lennart
>
> Cale Gibbard wrote:
> > On 28/01/06, Lennart Augustsson <[EMAIL PROTECTED]> wrote:
> >
> >>Remove the type signature for b and you will see the
> >>loss of sharing.
> >
> >
> > Nope, still not seeing it with either profiling or Debug.Trace. Also
> > -- the type signature I gave was polymorphic, so what's the deal? If
> > adding a polymorphic type signature fixes the problem, and a
> > polymorphic type signature can be inferred, why not simply treat the
> > source as if one had been written there?
> >
> >
> >>It mostly hurts people like John Hughes that don't
> >>have the energy to put in type signatures. ;)
> >
> >
> > Well, sure. I don't think that we should see exponential blowup in
> > complexity of some programs by leaving out type signatures (though if
> > it was only in sufficiently rare cases, I could put up with it).
> >
> >
> >>On the subject of type signatures, I don't want to
> >>make them mandatory, but I think they should be strongly
> >>encouraged.  I don't buy the argument that they make
> >>refactoring programs that much harder.  It's still
> >>very easy to do, the type checker will tell you exactly
> >>where. :)
> >
> >
> > Me too. It's nice to be able to write quick programs where you leave
> > out the type signatures, but including them is always good for real
> > code. I also think that type signatures (and the type system in
> > general), makes it much easier to refactor code and work on code with
> > which you're unfamiliar.
> >
> >  - Cale
> >
>
>
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread Lennart Augustsson

Oh, I guess I did one more change.  I put b in a separate module.

Your type signature isn't the most general, the most general is
b :: (Num a, Num b) => (a, b)
And that is the source of the problem.  You need to pass two
dictionaries.  To keep sharing you'd need some very clever
runtime machinery to find that the dictionaries are the same.
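
(Spelling the translation out may help. This is a hand-written sketch of the
standard dictionary-passing scheme with a toy dictionary type, not actual
compiler output:)

-- The most general type takes two Num dictionaries, so after translation the
-- where-bound x becomes a function of a dictionary and each component of the
-- pair runs fromInteger for itself.
data NumDict a = NumDict { dFromInteger :: Integer -> a }

b' :: NumDict a -> NumDict b -> (a, b)
b' dA dB = (x dA, x dB)       -- two different dictionaries: a single shared
  where                       -- result would not even be well-typed
    x :: NumDict c -> c
    x d = dFromInteger d 1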

-- Lennart

Cale Gibbard wrote:

On 28/01/06, Lennart Augustsson <[EMAIL PROTECTED]> wrote:


Remove the type signature for b and you will see the
loss of sharing.



Nope, still not seeing it with either profiling or Debug.Trace. Also
-- the type signature I gave was polymorphic, so what's the deal? If
adding a polymorphic type signature fixes the problem, and a
polymorphic type signature can be inferred, why not simply treat the
source as if one had been written there?



It mostly hurts people like John Hughes that don't
have the energy to put in type signatures. ;)



Well, sure. I don't think that we should see exponential blowup in
complexity of some programs by leaving out type signatures (though if
it was only in sufficiently rare cases, I could put up with it).



On the subject of type signatures, I don't want to
make them mandatory, but I think they should be strongly
encouraged.  I don't buy the argument that they make
refactoring programs that much harder.  It's still
very easy to do, the type checker will tell you exactly
where. :)



Me too. It's nice to be able to write quick programs where you leave
out the type signatures, but including them is always good for real
code. I also think that type signatures (and the type system in
general), makes it much easier to refactor code and work on code with
which you're unfamiliar.

 - Cale



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread Cale Gibbard
On 28/01/06, Lennart Augustsson <[EMAIL PROTECTED]> wrote:
> Remove the type signature for b and you will see the
> loss of sharing.

Nope, still not seeing it with either profiling or Debug.Trace. Also
-- the type signature I gave was polymorphic, so what's the deal? If
adding a polymorphic type signature fixes the problem, and a
polymorphic type signature can be inferred, why not simply treat the
source as if one had been written there?

> It mostly hurts people like John Hughes that don't
> have the energy to put in type signatures. ;)

Well, sure. I don't think that we should see exponential blowup in
complexity of some programs by leaving out type signatures (though if
it was only in sufficiently rare cases, I could put up with it).

> On the subject of type signatures, I don't want to
> make them mandatory, but I think they should be strongly
> encouraged.  I don't buy the argument that they make
> refactoring programs that much harder.  It's still
> very easy to do, the type checker will tell you exactly
> where. :)

Me too. It's nice to be able to write quick programs where you leave
out the type signatures, but including them is always good for real
code. I also think that type signatures (and the type system in
general), makes it much easier to refactor code and work on code with
which you're unfamiliar.

 - Cale
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread Lennart Augustsson

Remove the type signature for b and you will see the
loss of sharing.

It mostly hurts people like John Hughes that don't
have the energy to put in type signatures. ;)

On the subject of type signatures, I don't want to
make them mandatory, but I think they should be strongly
encouraged.  I don't buy the argument that they make
refactoring programs that much harder.  It's still
very easy to do, the type checker will tell you exactly
where. :)

-- Lennart



Cale Gibbard wrote:

On 28/01/06, Taral <[EMAIL PROTECTED]> wrote:


On 1/28/06, Cale Gibbard <[EMAIL PROTECTED]> wrote:


Do you have an example of such a program handy?


b = (x, x) where { x :: Num a => a; x = fromInteger 1 }

fromInteger is called twice.




--- mr.hs ---
{-# OPTIONS_GHC -fno-monomorphism-restriction #-}

import Debug.Trace

b :: Num a => (a,a)
b = (x,x)
where x :: Num a => a
  x = (trace "x" . fromInteger) 1

main = print b

[EMAIL PROTECTED] ghci -fno-monomorphism-restriction mr.hs
Loading package base-1.0 ... linking ... done.
Compiling Main ( mr.hs, interpreted )
Ok, modules loaded: Main.
*Main> :t b
b :: (Num a) => (a, a)
*Main> b
(x
1,1)

[EMAIL PROTECTED] ghc -fno-monomorphism-restriction -o mr mr.hs && ./mr
x
(1,1)
---

If x isn't being shared, then Debug.Trace at least seems incapable of
resolving that fact.

Let's try profiling:

--- mr.hs, revised -
{-# OPTIONS_GHC -fno-monomorphism-restriction #-}

b :: Num a => (a,a)
b = (x,x)
where x :: Num a => a
  x = {-# SCC "x" #-} fromInteger 1

main = print b
--
[EMAIL PROTECTED] ghc -fno-monomorphism-restriction -prof -auto-all -o
mr mr.hs && ./mr +RTS -p
(1,1)
[EMAIL PROTECTED] cat mr.prof
Sat Jan 28 18:21 2006 Time and Allocation Profiling Report  (Final)

   mr +RTS -p -RTS

total time  =     0.00 secs   (0 ticks @ 20 ms)
total alloc =   17,972 bytes  (excludes profiling overheads)

COST CENTRE    MODULE       %time %alloc

CAF            GHC.Handle     0.0   48.2
CAF            System.IO      0.0    1.4
CAF            Main           0.0   50.3

                                             individual     inherited
COST CENTRE  MODULE        no.  entries  %time %alloc   %time %alloc

MAIN         MAIN            1        0    0.0    0.0     0.0  100.0
 CAF         Main          150        6    0.0   50.3     0.0   50.5
  b          Main          157        1    0.0    0.1     0.0    0.2
   x         Main          158        1    0.0    0.1     0.0    0.1
  main       Main          156        1    0.0    0.0     0.0    0.0
 CAF         System.IO     105        1    0.0    1.4     0.0    1.4
 CAF         GHC.Handle    103        3    0.0   48.2     0.0   48.2

---

One entry to x. So where is this lack of sharing I keep hearing about?
Even if I repeat these tests with -fno-cse, the results are the same.

 - Cale
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread Cale Gibbard
On 28/01/06, Taral <[EMAIL PROTECTED]> wrote:
> On 1/28/06, Cale Gibbard <[EMAIL PROTECTED]> wrote:
> > Do you have an example of such a program handy?
>
> b = (x, x) where { x :: Num a => a; x = fromInteger 1 }
>
> fromInteger is called twice.
>

--- mr.hs ---
{-# OPTIONS_GHC -fno-monomorphism-restriction #-}

import Debug.Trace

b :: Num a => (a,a)
b = (x,x)
where x :: Num a => a
  x = (trace "x" . fromInteger) 1

main = print b

[EMAIL PROTECTED] ghci -fno-monomorphism-restriction mr.hs
Loading package base-1.0 ... linking ... done.
Compiling Main ( mr.hs, interpreted )
Ok, modules loaded: Main.
*Main> :t b
b :: (Num a) => (a, a)
*Main> b
(x
1,1)

[EMAIL PROTECTED] ghc -fno-monomorphism-restriction -o mr mr.hs && ./mr
x
(1,1)
---

If x isn't being shared, then Debug.Trace at least seems incapable of
resolving that fact.

Let's try profiling:

--- mr.hs, revised -
{-# OPTIONS_GHC -fno-monomorphism-restriction #-}

b :: Num a => (a,a)
b = (x,x)
where x :: Num a => a
  x = {-# SCC "x" #-} fromInteger 1

main = print b
--
[EMAIL PROTECTED] ghc -fno-monomorphism-restriction -prof -auto-all -o
mr mr.hs && ./mr +RTS -p
(1,1)
[EMAIL PROTECTED] cat mr.prof
Sat Jan 28 18:21 2006 Time and Allocation Profiling Report  (Final)

   mr +RTS -p -RTS

total time  =     0.00 secs   (0 ticks @ 20 ms)
total alloc =   17,972 bytes  (excludes profiling overheads)

COST CENTRE    MODULE       %time %alloc

CAF            GHC.Handle     0.0   48.2
CAF            System.IO      0.0    1.4
CAF            Main           0.0   50.3

                                             individual     inherited
COST CENTRE  MODULE        no.  entries  %time %alloc   %time %alloc

MAIN         MAIN            1        0    0.0    0.0     0.0  100.0
 CAF         Main          150        6    0.0   50.3     0.0   50.5
  b          Main          157        1    0.0    0.1     0.0    0.2
   x         Main          158        1    0.0    0.1     0.0    0.1
  main       Main          156        1    0.0    0.0     0.0    0.0
 CAF         System.IO     105        1    0.0    1.4     0.0    1.4
 CAF         GHC.Handle    103        3    0.0   48.2     0.0   48.2

---

One entry to x. So where is this lack of sharing I keep hearing about?
Even if I repeat these tests with -fno-cse, the results are the same.
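
(For completeness, a variant that does show the recomputation; this follows
Lennart's point, made elsewhere in the thread, that the most general type for
b needs two dictionaries. The Int/Double instantiation is my own choice:)

{-# OPTIONS_GHC -fno-monomorphism-restriction #-}
import Debug.Trace

-- b at its most general type: the two components may be used at different
-- types, so they take different dictionaries.
b :: (Num a, Num b) => (a, b)
b = (x, x)
  where x :: Num c => c
        x = trace "x" (fromInteger 1)

main :: IO ()
main = print (b :: (Int, Double))   -- "x" should be traced once per component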

 - Cale
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread Lennart Augustsson

John Meacham wrote:

An optimization that jhc implements which should be portable to other
compilers is the 'type analysis' pass. it does a fixpoint iteration to
determine every possible type every polymorphic value is called at and
then goes through and specializes all that are called at exactly one
type, which is a surprisingly large amount. This is tied to the
monomorphism restriction in that given foo :: Num a . a which is only
used as an Int, should I turn it into a caf or give it a phantom
argument to prevent sharing? I have not made up my mind on the issue...


I think most compilers have had this kind of option for quite a while.
But how well it works depends on the scope of it.  With separate
compilation (that really generates code) you can't really do it
very well.

Another (small) complication is that the fixpoint iteration
doesn't necessarily terminate.  So you need a backup strategy.

-- Lennart
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread John Meacham
On Sat, Jan 28, 2006 at 03:41:53PM -0500, Cale Gibbard wrote:
> Is it really so impossible to implement typeclasses in a way which
> avoids this problem altogether? Perhaps the need for the MR in the
> first place is simply an indication that dictionary passing is not a
> complete solution to implementing typeclasses. It seems quite
> plausible that there should be an implementation which preserves
> polymorphism, and yet doesn't result in horrific inefficiency. Do we
> have a proof (or at least strong evidence) that such a thing can't
> exist?

interestingly enough, the monomorphism restriction in jhc actually
should apply to all polymorphic values, independently of the type class
system.

x :: a 
x = x 

will transform into something that takes a type parameter and is hence
not shared. I doubt this will cause a problem in practice since there
aren't really any useful values of type forall a . a other than bottom.
type classes don't introduce anything new at runtime that normal
polymorphism didn't already require.


An optimization that jhc implements which should be portable to other
compilers is the 'type analysis' pass. it does a fixpoint iteration to
determine every possible type every polymorphic value is called at and
then goes through and specializes all that are called at exactly one
type, which is a surprisingly large amount. This is tied to the
monomorphism restriction in that given foo :: Num a . a which is only
used as an Int, should I turn it into a caf or give it a phantom
argument to prevent sharing? I have not made up my mind on the issue...


it would also be possible to require the compiler 'memoize' all top
level bindings on their type parameter. then the problem goes away, but
at the cost of hiding some machinery under the hood. however the type
analysis pass mentioned above would often be able to discard this
'under the hood' memoization.


John


-- 
John Meacham - ⑆repetae.net⑆john⑈ 
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread John Meacham
On Fri, Jan 27, 2006 at 06:05:56PM -0500, Robert Dockins wrote:
> One aspect of this discussion I've yet to see that I think is  
> important is, how do the various proposals for removal/modifications  
> of M-R impact implicit parameters?  Are implicit parameters likely to  
> be in Haskell'?  It seems like the proposal to default to polymorphic  
> binding and have special syntax for monomorphic binding also fixes  
> one of the major probems now with implicit parameters.

I strongly doubt it. implicit parameters are quite broken semantically
and there are better solutions to the problem they are intended to
solve which they never solved that well to begin with.
John

-- 
John Meacham - ⑆repetae.net⑆john⑈ 
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread Taral
On 1/28/06, Cale Gibbard <[EMAIL PROTECTED]> wrote:
> Do you have an example of such a program handy?

b = (x, x) where { x :: Num a => a; x = fromInteger 1 }

fromInteger is called twice.

--
Taral <[EMAIL PROTECTED]>
"Computer science is no more about computers than astronomy is about
telescopes."
-- Edsger Dijkstra
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread Cale Gibbard
On 26/01/06, John Hughes <[EMAIL PROTECTED]> wrote:
> I had promised myself not to propose anything that was not already
> implemented,
> tried, and tested, but I just can't resist bringing up a proposal I've
> made in the
> past to fix the monomorphism restriction. Maybe now is the time to do so.
>
> I know many will proclaim "just get rid of it", but consider this:
> without the
> M-R, some programs can run exponentially slower than you expect. This
> actually happened to me, which is how we discovered something of the sort
> was needed in the first place. But the current design is surely a wart.

Do you have an example of such a program handy? Perhaps I'm just doing
something stupid, but I can't seem to replicate the lack of sharing
that's supposed to happen with the MR turned off in GHCi, so it's hard
to play around with doing various translations by hand and seeing the
results. Perhaps the optimiser is getting to the code and commoning
things up? If this is the case, can't we just show that it always does
so, or write that commoning into the translation, as suggested by
Philippa, and forget about it?

 - Cale
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-28 Thread Cale Gibbard
Is it really so impossible to implement typeclasses in a way which
avoids this problem altogether? Perhaps the need for the MR in the
first place is simply an indication that dictionary passing is not a
complete solution to implementing typeclasses. It seems quite
plausible that there should be an implementation which preserves
polymorphism, and yet doesn't result in horrific inefficiency. Do we
have a proof (or at least strong evidence) that such a thing can't
exist?

I'm going to think about this some more. I really find a lot of these
solutions to be heavy handed, as you really want things to be
polymorphic by default, and let the programmer or various compiler
optimisations decide when they're only used monomorphically. A special
keyword for killing polymorphic binding might work, but what's so
wrong with just adding a type signature?

 - Cale

 -- sorry if anyone gets two copies of this, I thought I sent it, but
later received notification that it wasn't sent.
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-27 Thread Benjamin Franksen
On Saturday 28 January 2006 01:13, Twan van Laarhoven wrote:
> Benjamin Franksen wrote:
>  > My personal opinion is that it should be exactly the other way
>  > around:
>  >
>  > All normal bindings (i.e. using '=') should be as polymorphic and
>  > general as possible.
>
> Do you mean *all* bindings, 

Yes.

> or only top-level ones? If you really 
> mean
>
> all, wouldn't e be polymorphic (with type Num a=>a) in, say:
>  > f x = e + e
>  >where e = very_expensive_polymorphic_function x
>
> That would be a Very Bad Thing.

Why? The compiler /might/ be able to decide that 'f' is in fact used 
only monomorphically and thus 'e' can be shared. Even if 'f' is 
exported (in which case I would most probably write a signature at one 
time), a compiler that does whole-program analysis can find out. It 
could even specialize 'f' for each type at which it is used, each 
specialized version being monomorphic, internally, so that 'e' can be 
'maximally shared'.

If all else fails, you can always change it to := (or whatever other 
symbol will be agreed upon) if you want to indicate to the 
compiler/interpreter: "I'd rather get an error message if I use this 
polymorphically/overloaded/whatever. This 'e' must be shared at all 
costs!".

Cheers,
Ben
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-27 Thread Philippa Cowderoy
On Sat, 28 Jan 2006, Twan van Laarhoven wrote:

> Benjamin Franksen wrote:
> 
> > My personal opinion is that it should be exactly the other way around:
> >
> > All normal bindings (i.e. using '=') should be as polymorphic and
> > general as possible.
> 
> Do you mean *all* bindings, or only top-level ones? If you really mean all,
> wouldn't e be polymorphic (with type Num a=>a) in, say:
> > f x = e + e
> >where e = very_expensive_polymorphic_function x
> 
> That would be a Very Bad Thing.
> 

Is there any reason in that particular case that the dictionary transform 
can't produce something like this:

f x d = e' + e'
  where e d = very_expensive_polymorphic_function x d
        e' = e d

? What's the pathological case that prevents this applying more generally?

-- 
[EMAIL PROTECTED]

Performance anxiety leads to premature optimisation
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-27 Thread Twan van Laarhoven

Benjamin Franksen wrote:

> My personal opinion is that it should be exactly the other way around:
>
> All normal bindings (i.e. using '=') should be as polymorphic and
> general as possible.

Do you mean *all* bindings, or only top-level ones? If you really mean 
all, wouldn't e be polymorphic (with type Num a=>a) in, say:

> f x = e + e
>where e = very_expensive_polymorphic_function x

That would be a Very Bad Thing.

Twan
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-27 Thread Robert Dockins
One aspect of this discussion I've yet to see that I think is  
important is, how do the various proposals for removal/modifications  
of M-R impact implicit parameters?  Are implicit parameters likely to  
be in Haskell'?  It seems like the proposal to default to polymorphic  
binding and have special syntax for monomorphic binding also fixes  
one of the major problems now with implicit parameters.


Rob Dockins
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-27 Thread Andrew Pimlott
On Fri, Jan 27, 2006 at 07:04:43PM +0100, Benjamin Franksen wrote:
> All normal bindings (i.e. using '=') should be as polymorphic and 
> general as possible.

I agree with the position in Ben's email.  Also, especially since
much of the discussion has considered the impact on beginners, I want to
recall that the monomorphism restriction can cause highly confusing
error messages.  They do not necessarily mention "probably cause:
monomorphism restriction", and one cannot always find the problem by
looking at the error location and the definition.  I have a feeling
there should be a better example of this issue floating around, but not
finding one I'll submit my own:

bar :: IO Char
bar = foo

foo = return 'a'

baz = "hello " ++ foo

ghc 6.4.1 gives the error

try.hs:2:6:
Couldn't match `IO' against `[]'
  Expected type: IO Char
  Inferred type: [Char]
In the definition of `bar': bar = foo

If you imagine that baz is buried at the bottom of the module in some
test code, you might see how this could be mystifying.  After all,
there's no list mentioned in foo or bar.  (But perhaps the error message
could be improved to mention the monomorphism restriction?)

My feeling is that when a variable definition should be monomorphic (and
thus shared), the programmer ought to give the monomorphic type
explicitly, rather than count on some use of the variable (possibly far
away in the code) to decide it implicitly.
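
(To make that concrete, here is one compiling variant of the example; the
signature on foo is my addition, not part of the original report. Writing
foo :: IO Char explicitly instead would move the error to baz, which is where
the actual mistake lives.)

bar :: IO Char
bar = foo

foo :: Monad m => m Char   -- explicit signature: the type no longer depends
foo = return 'a'           -- on whichever use the type checker happens to see

baz :: String
baz = "hello " ++ foo      -- here foo is used at m ~ [], i.e. as the String "a"

main :: IO ()
main = putStrLn baz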

Andrew
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-27 Thread Taral
On 1/27/06, Benjamin Franksen <[EMAIL PROTECTED]> wrote:
> All normal bindings (i.e. using '=') should be as polymorphic and
> general as possible. An alternative symbol (':=') should be available,

I don't want to rain on any parade, but just let me point out that
under the current grammar, := is a constructor.

--
Taral <[EMAIL PROTECTED]>
"Computer science is no more about computers than astronomy is about
telescopes."
-- Edsger Dijkstra
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-27 Thread Benjamin Franksen
On Friday 27 January 2006 10:51, Simon Marlow wrote:
> How about an even simpler solution:
>
>   *All* pattern and variable bindings are monomorphic unless a type
>   signature is given.

My personal opinion is that it should be exactly the other way around:

All normal bindings (i.e. using '=') should be as polymorphic and 
general as possible. An alternative symbol (':=') should be available, 
but strictly as a means to optimize binding of monomorphic ground 
values (that are then /guaranteed/ to be computed only once). I think 
that the language should /never/ force the programmer to consider 
performance issues from the outset. I see ':=' as something similar to 
SPECIALIZE or INLINE pragmas -- such things should never be mandatory 
to get a program to work. Thus, I think, for all programs that are 
accepted by the compiler, replacing all ':=' by '=' should result in a 
program that gets accepted as well, it should be semantically 
equivalent (apart from possibly being slower or consuming more memory).

Furthermore, in Haskell there are /thousands/ of other possibilities to 
ruin the efficiency of your program, most of them a lot less obvious 
and in fact very hard to optimize away. For instance, compared to all 
those subtle too-much-laziness performance problems, this one could be 
easily diagnosed by profiling and also easily fixed by introducing the 
occasional ':='. Also, the compiler could help by issuing warnings (but 
only if prompted to do so by some compiler flag).

On the other hand, forcing the programmer to write signatures in order 
to get the most general type of a bound variable -- when this would be 
avoidable in principle -- is a very bad idea, IMO. Type inference is an 
extremely valuable feature when prototyping and experimenting. It is 
also of great help for people who want to explore some of more advanced 
aspects of Haskell's type system. Asking ghci or hugs to give me the 
inferred type of a complicated function definition has often given me 
the decisive insight about how to write a correct version.

Just my (hopefully not too uninformed) 2 cents.

Ben
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


RE: The dreaded M-R

2006-01-27 Thread Simon Marlow
On 27 January 2006 10:54, Simon Peyton-Jones wrote:

>>> How about an even simpler solution:
>>> 
>>>  *All* pattern and variable bindings are monomorphic unless a type 
>>> signature is given. 
>>> 
> 
> 
>> Now that IS an interesting idea. The more I think about it, the more
>> I like it. 
> 
> I like it too.  But be aware that a top-level definition
> 
>   reverse = foldr (\x ys -> ys ++ [x]) []
> 
> would get the inferred type
> 
>   [()] -> [()]
> 
> because it'd be monomorphic, and completely un constrained type
> variables are defaulted to (), in GHC at least. 

This only becomes an issue if the binder is exported, because defaulting
doesn't happen until the complete module has been typechecked.  So in
that case, probably the least surprising behaviour is for it to be a
static error to export a binding with unresolved type variables, after
defaulting has been applied.

[ oops, hit send by mistake.. ] 

I was just going to point out that something similar happens in GHCi:

Prelude> f <- return reverse
Prelude> :t f
f :: [()] -> [()]

Cheers,
Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


RE: The dreaded M-R

2006-01-27 Thread Simon Marlow
On 27 January 2006 10:54, Simon Peyton-Jones wrote:

>>> How about an even simpler solution:
>>> 
>>>  *All* pattern and variable bindings are monomorphic unless a type 
>>> signature is given. 
>>> 
> 
> 
>> Now that IS an interesting idea. The more I think about it, the more
>> I like it. 
> 
> I like it too.  But be aware that a top-level definition
> 
>   reverse = foldr (\x ys -> ys ++ [x]) []
> 
> would get the inferred type
> 
>   [()] -> [()]
> 
> because it'd be monomorphic, and completely un constrained type
> variables are defaulted to (), in GHC at least. 

This only becomes an issue if the binder is exported, because defaulting
doesn't happen until the complete module has been typechecked.  So in
that case, probably the least surprising behaviour is for it to be a
static error to export a binding with unresolved type variables, after
defaulting has been applied.

Incidentally, that's the type you get in GHCi.
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


RE: The dreaded M-R

2006-01-27 Thread Simon Peyton-Jones

| >How about an even simpler solution:
| >
| >  *All* pattern and variable bindings are monomorphic unless a type
| >  signature is given.
| >


| Now that IS an interesting idea. The more I think about it, the more I
like it.

I like it too.  But be aware that a top-level definition 

reverse = foldr (\x ys -> ys ++ [x]) []

would get the inferred type

[()] -> [()]

because it'd be monomorphic, and completely unconstrained type
variables are defaulted to (), in GHC at least.

Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-27 Thread Johannes Waldmann
John Hughes wrote:

> Actually, I find the need to specify type signatures to get overloading
> awkward already [...]
> I don't like needing to maintain such type signatures when I change code
> elsewhere. [...]
> However, this is reasonably a separate problem, [...]

indeed. I use "constraint collection classes" (without methods) as in:
http://141.57.11.163/cgi-bin/cvsweb/lib/Autolib/NFA/Data.hs.drift?rev=1.15
look for class NFAC (it collects the constraints for using type NFA).
This allows writing short contexts at the call sites,
and making local changes at the definition side.
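
(A minimal version of the trick, with invented names; the real classes are in
the linked NFA.Data module, and the instance needs the usual GHC extensions:)

{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

-- A method-less class that just bundles the constraints a state type needs.
class (Eq st, Ord st, Show st) => StateC st
instance (Eq st, Ord st, Show st) => StateC st

-- Call sites carry one short constraint instead of repeating the bundle,
-- and the bundle itself can be changed in a single place later.
statesEqual :: StateC st => [st] -> [st] -> Bool
statesEqual xs ys = xs == ys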

Best regards,
-- 
-- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 --
 http://www.imn.htwk-leipzig.de/~waldmann/ ---

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-27 Thread John Hughes

Simon Marlow wrote:


How about an even simpler solution:

 *All* pattern and variable bindings are monomorphic unless a type
 signature is given.

I wonder how much code this would break?  ... I'd be very
interested to tweak this in GHC and see how much code still passes the
type checker.

 



Now that IS an interesting idea. The more I think about it, the more I like
it. I suspect very few programs would break, because in many cases such a
definition is not only polymorphic but also overloaded, and so already
carries a type signature.

Actually, I find the need to specify type signatures to get overloading 
awkward already
--but I don't think needing to do so for purely polymorphic variable 
bindings would be
significantly more awkward. I find the right context is often quite hard 
to predict, and
I want to use Hugs or GHCi to compute it rather than figure it out 
myself. Moreover,
I don't like needing to maintain such type signatures when I change code 
elsewhere.
(For example, if I've used association lists as look-up tables, then 
I'll have Eq constraints
on many definitions, and if I then change that to ordered binary trees 
then I suddenly need
to change all those Eqs to Ords. I've known cases where the work needed 
to do that
maintenance was so great that the desirable change to the program could 
not be made
within the time available for the project.) However, this is reasonably 
a separate problem,
which I don't think is made significantly worse by your idea. Perhaps 
it'll be solved
(e.g. by partial type signatures), perhaps not, but in either case I 
like your suggestion.


One more thought. Perhaps we could indicate polymorphism/monomorphism 
separately
from writing a type signature. Suppose, when we want a polymorphic or 
overloaded

variable definition, we were to write

   poly x = e

Likewise, when we want a monomorphic function definition, we could write

   mono f x y z = e

Then we could concisely indicate our intention, while leaving open the
possibility of using Hugs or GHCi to compute type signatures, or leaving
them out to reduce that part of the maintenance effort. Arguably, writing

   poly x = e

gives a clearer warning that something funny is going on--i.e. there is 
a risk of repeated
computation--than writing a type signature which you then have to 
examine, to see
whether or not it contains a class constraint. But of course, this idea 
would mean more
changes to existing code, adding a poly before variable definitions that 
currently carry
an overloaded type signature. Although since it would be an error to 
write such a type
signature on a definition NOT marked as poly, finding the right places 
to add it would

simply be a matter of recompiling and fixing the errors found.

The more I think about it--and recalling that a type signature need not be
written anywhere near the definition it refers to--the more I think using
type signatures to carry vital information about the *semantics* of a
definition is a bad idea.

John


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


RE: The dreaded M-R

2006-01-27 Thread Simon Marlow
On 26 January 2006 16:07, John Hughes wrote:

> Simon Marlow wrote:
>
>> I wonder if there's an alternative solution along these lines:
>> 
>>  - We use PartialTypeSignatures to make bindings monomorphic:
>> 
>> 
>> http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/PartialTypeSigs
>> 
>>eg.
>> 
>>  x :: _
>>  x = (+1)
>> 
>>  - we make it a static error for a variable bound by a simple pattern
>>binding ("x = e") to be overloaded, unless a type signature is
>>given. The error message would explain the problem, and how to
>>fix it. Alternatively, we make it a strong warning.
>> 
>> It seems to me that the partial type signatures extension provides a
>> lot of bang for the buck - it gives us a way out of the MR in
>> addition to partial type signatures. 
>> 
> I don't like this. Once students start dropping type signatures (which
> they do
> pretty soon for local variables in where-clauses), they would
> sometimes-- unpredictably as far as they're concerned--get an error
> message telling them they must put one back in again, but it's enough
> to write x :: _. Can 
> you imagine
> explaining to an average student in the first year why they MUST put
> in a type signature, but it doesn't need to include a type???

Understood, but what about when the student writes '=' instead of ':='
by mistake - most of the time it'll work!  But occasionally they fall
foul of the reason we had the MR in the first place.  Presumably the
compiler should emit a noisy warning, but continue anyway?  Won't that
be confusing?

How about an even simpler solution:

  *All* pattern and variable bindings are monomorphic unless a type
  signature is given.

I wonder how much code this would break?  It can't be too bad, because
John is suggesting using := for variable bindings everywhere as a
starting point.  Also, SML users live with something very similar.  How
often do we really need polymorphism in a variable or pattern binding?
I'm guessing probably not that often, because by far the most common use
of polymorphism is in functions.  You lose if you write

  mymap = map

or
  
  mynil = []

and then try to use those at more than one type, but I'm guessing that's
quite rare, and you can still give a type signature.  I'd be very
interested to tweak this in GHC and see how much code still passes the
type checker.
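
To make that concrete, here is a hedged sketch of the kind of code affected
(the names are made up). Today the signature is optional, because the
current M-R only bites on overloaded bindings; under the proposal it would
be required to keep the binding polymorphic:

   mymap :: (a -> b) -> [a] -> [b]   -- mandatory under the proposed rule
   mymap = map

   -- Used at two different types; without the signature, the proposed
   -- rule would reject one of these uses.
   useAtTwoTypes :: ([Int], [Bool])
   useAtTwoTypes = (mymap (+ 1) [1, 2, 3], mymap not [True, False])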

Cheers,
Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread Ganesh Sittampalam

On Thu, 26 Jan 2006, John Hughes wrote:

(Some object that := "means" assignment--but come on, 
we're not reserving := for future use as assignment in Haskell, are we? 
Why should we give up a perfectly good symbol because it's used 
elsewhere to mean something else?).


Programmers unfamiliar with Haskell but familiar with general 
programming ideas would be confused by it. I think this is a good reason 
to avoid (mis)use of this symbol.


Quite a lot has been mentioned in various threads including this one about 
making sure that Haskell stays/becomes an easy/easier language to teach to 
undergraduates. However, there is a large and growing community of 
experienced programmers coming to Haskell and liking it, and we must keep 
them in mind too. A lot of them use the #haskell IRC channel as a 
resource, and as a regular there I have the impression that the numbers 
are on their way up quite rapidly.


Cheers,

Ganesh
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread Aaron Denney
On 2006-01-26, John Hughes <[EMAIL PROTECTED]> wrote:
> I don't think it's hard. I would just teach students to define
> functions with =, and "variables" with :=. I tell my students to write
> type signatures at the beginning anyway, so they don't risk being
> bitten by the M-R anyway. Beginning students just do what you tell
> them, and they already think of function and variable definitions as
> different. Learning a different syntax for one of them would not be a
> problem.
>
> Once they've mastered basic programming and start getting interested
> in things like overloading, then you have to explain how the M-R
> works. I'd much rather explain =/:= than try to teach them how you
> know whether a definition is shared or not right now.

And this gets back to the "what is the target audience for Haskell'?"
question.  Since I'm not a CS student, and I'm not teaching CS students,
this whole argument is rather unconvincing to me.

-- 
Aaron Denney
-><-

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread Ross Paterson
On Thu, Jan 26, 2006 at 03:01:32PM +0100, John Hughes wrote:
> (I wonder what happens today, if you write mutually recursive
> definitions where the M-R applies to some, but not others?)

Under the Haskell 98 rules (4.5.5), the MR applies to whole dependency
groups.  H98 also requires (4.5.2) that the types of all variables in
a dependency group have the same context (whether MR applies or not).
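
A small, hypothetical example of how those rules play out:

   -- f and g are mutually recursive, so they form one dependency group,
   -- and 4.5.2 gives both bindings the same (Ord a, Num a) context.
   -- Because g is a simple pattern binding, the MR applies to the whole
   -- group: the context is not generalised, and top-level defaulting then
   -- fixes f :: Integer -> Integer and g :: Integer.
   f x = if x > g then g else x
   g   = f 10 + 1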

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread John Hughes

Simon Marlow wrote:


On 26 January 2006 09:59, John Hughes wrote:

The solution I favour is simply to use *different syntax* for the two
forms of binding, so that a definition is monomorphic, and computed
at most once, if it uses the monomorphic binding operator, and
polymorphic/overloaded, computed at each use, if it uses the other.
Whether it's a function definition or not is irrelevant, as is whether
or not it carries a type signature.

The trick is finding good syntax. I suggest = for bind-by-name, and
:= for bind-by-need.

The reasoning for the proposal makes complete sense to me, but I don't
feel the proposed solution strikes the right balance.  The MR is a
subtle point that we don't want to have to burden newcomers to the
language with, but having two forms of binding is a fundamental part of
the language design that would surely crop up early on the Haskell
learning curve.  John - how do you envisage teaching this?

I don't think it's hard. I would just teach students to define functions
with =, and "variables" with :=. I tell my students to write type signatures
at the beginning anyway, so they don't risk being bitten by the M-R anyway.
Beginning students just do what you tell them, and they already think of
function and variable definitions as different. Learning a different syntax
for one of them would not be a problem.

Once they've mastered basic programming and start getting interested in
things like overloading, then you have to explain how the M-R works. I'd
much rather explain =/:= than try to teach them how you know whether a
definition is shared or not right now.


I wonder if there's an alternative solution along these lines:

 - We use PartialTypeSignatures to make bindings monomorphic:

   http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/PartialTypeSigs

   eg.

 x :: _
 x = (+1)

 - we make it a static error for a variable bound by a simple pattern
   binding ("x = e") to be overloaded, unless a type signature is
given.
   The error message would explain the problem, and how to fix it.
   Alternatively, we make it a strong warning.

It seems to me that the partial type signatures extension provides a lot
of bang for the buck - it gives us a way out of the MR in addition to
partial type signatures.

I don't like this. Once students start dropping type signatures (which they
do pretty soon for local variables in where-clauses), they would sometimes--
unpredictably as far as they're concerned--get an error message telling them
they must put one back in again, but it's enough to write x :: _. Can you
imagine explaining to an average student in the first year why they MUST put
in a type signature, but it doesn't need to include a type???

Don't underestimate the difficulties many students already face. At this
stage, they're not even completely sure what the difference is between a
type and a value, let alone a type and a class! Understanding the effect of
the presence or absence of a type signature is beyond most students until
much, much later.


If we replace or revise the M-R, the replacement should be very, very
simple. The M-R in its present form is a clever, and not terribly
complicated, solution--but complicated enough to have caused no end of
trouble over the years. Let's not be clever, let's be straightforward and
explicit: two binding forms, two notations.

John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread Malcolm Wallace
"Simon Marlow" <[EMAIL PROTECTED]> writes:

> On 26 January 2006 09:59, John Hughes wrote:
> > The solution I favour is simply to use *different syntax* for the two
> > forms of binding,
> 
> I wonder if there's an alternative solution along these lines:
>   - We use PartialTypeSignatures to make bindings monomorphic:
> eg.
> 
>   x :: _
>   x = (+1)

I agree with Simon that having two forms of binding feels like a heavyweight
solution.  Variable binding is just such a fundamental thing that
introducing a second form would need exceptional justification IMO.
However partial type signatures seem like a very nice alternative.
Just as currently, the decision on monomorphising a binding is based
on the type signature (its presence, absence, or form).

Regards,
Malcolm
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread Philippa Cowderoy
On Thu, 26 Jan 2006, Johannes Waldmann wrote:

> If this seems impossible, then the function itself probably *is*
> complex, and its type would give valuable information,
> and I don't see how a programmer (or a reader) benefits
> from a language that allows this information to be omitted.
> 

For one, because that makes it possible to load it into an interpreter and 
be told the type. 

-- 
[EMAIL PROTECTED]

A problem that's all in your head is still a problem.
Brain damage is but one form of mind damage.
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


RE: The dreaded M-R

2006-01-26 Thread Simon Marlow
On 26 January 2006 09:59, John Hughes wrote:

> The solution I favour is simply to use *different syntax* for the two
> forms of binding, so that a definition is monomorphic, and computed
> at most once, if it uses the monomorphic binding operator, and
> polymorphic/overloaded, computed at each use, if it uses the other.
> Whether it's a function definition or not is irrelevant, as is whether
> or not it carries a type signature.
> 
> The trick is finding good syntax. I suggest = for bind-by-name, and
> := for bind-by-need.

The reasoning for the proposal makes complete sense to me, but I don't
feel the proposed solution strikes the right balance.  The MR is a
subtle point that we don't want to have to burden newcomers to the
language with, but having two forms of binding is a fundamental part of
the language design that would surely crop up early on the Haskell
learning curve.  John - how do you envisage teaching this?

I wonder if there's an alternative solution along these lines:

  - We use PartialTypeSignatures to make bindings monomorphic:

    http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/PartialTypeSigs

eg.

  x :: _
  x = (+1)

(incidentally, until recently it was possible to do this
in GHC using scoped type variables, but the change in the semantics
of scoped type variables has removed that possibility).

  - we make it a static error for a variable bound by a simple pattern
binding ("x = e") to be overloaded, unless a type signature is
given.
The error message would explain the problem, and how to fix it.
Alternatively, we make it a strong warning.

It seems to me that the partial type signatures extension provides a lot
of bang for the buck - it gives us a way out of the MR in addition to
partial type signatures.

I'm not sure what to do about non-simple pattern bindings, though.

Cheers,
Simon
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread John Hughes

Johannes Waldmann wrote:


(entering ironic mode, but not quite:)

So, what about making type signatures mandatory,
as the rest of the civilized world does happily for decades ...


If that's a serious proposal, then I'll argue against it--but do we
really want to raise that question? One of the strengths of
Haskell is that it supports both implicit and explicit typing
well.

John
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread Henrik Nilsson

Dear all,

Johannes Waldmann wrote:

> So, what about making type signatures mandatory,
> as the rest of the civilized world does happily for decades ...

Given that explicit type signatures are increasingly required
for dealing with other aspects (polymorphic recursion,
rank-2-or-higher polymorphism, GADTs ...), that would
seem reasonable.

Personally, though, I have to admit that I've never had all that many
problems with the M-R in the first place, probably because I do write
top-level type signatures as soon as I get into serious programming.

That said, I do find it convenient that type signatures
can be omitted.

And I wonder if this is a sufficiently significant problem
to warrant breaking backwards compatibility in this respect.

All the best,

/Henrik

--
Henrik Nilsson
School of Computer Science and Information Technology
The University of Nottingham
[EMAIL PROTECTED]



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread Ross Paterson
On Thu, Jan 26, 2006 at 10:59:22AM +0100, John Hughes wrote:
> The fact is, Haskell has two different binding mechanisms--bind-by-name
> (used for overloaded definitions), and bind-by-need (monomorphic). Both
> are useful: bind-by-name lets us name overloaded expressions, while
> bind-by-need gives performance guarantees. The trouble is just the way
> we distinguish them--where the compiler is basically guessing from the
> form of a definition which one to use.
> [...]
> The solution I favour is simply to use *different syntax* for the two
> forms of binding, so that a definition is monomorphic, and computed
> at most once, if it uses the monomorphic binding operator, and
> polymorphic/overloaded, computed at each use, if it uses the other.
> Whether it's a function definition or not is irrelevant, as is whether
> or not it carries a type signature.
> 
> The trick is finding good syntax. I suggest = for bind-by-name, and
> := for bind-by-need. (Some object that := "means" assignment--but come
> on, we're not reserving := for future use as assignment in Haskell, are we?
> Why should we give up a perfectly good symbol because it's used elsewhere
> to mean something else?). With this notation, = would be appropriate
> for function definitions, and := for most non-function definitions. It
> would be instantly clear where there was a possibility of repeated
> evaluation, and where not.
> 
> The problem with making such a syntactic distinction is that, however
> it's done, many changes must be made to existing programs. Just because
> existing programs contain many bindings of each sort, there's no
> getting away from the fact that a syntactic distinction will force
> changes. In principle this could be automated, of course--not hard
> but somebody would have to do it. But perhaps it would be worth it,
> to eliminate probably the number one wart, and solve the problems
> above.

You're proposing that the =/:= distinction both decides whether
constrained type variables are monomorphic and whether the binding
should be implemented using sharing.  If it only did the former (and the
expectation was that all pattern bindings with unconstrained types used
sharing), then existing legal programs would still be legal, and the
examples that currently trip over the MR would be legal but inefficient.
(Also, what does the shared/unshared distinction mean for functions?)

What if one has mutually recursive bindings, some using = and some := ?
Does monomorphism kick in if some of the variables in a binding group
use :=, or would we just require that all bindings in the same group
use the same binder?  (At first I couldn't see why one would ever use :=
with function bindings, but perhaps that's the reason.)

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: The dreaded M-R

2006-01-26 Thread Johannes Waldmann
John Hughes wrote:

> * You can't eta-convert definitions freely, if there is no type signature.
...
> * Definitions without a type-signature can change ...

(entering ironic mode, but not quite:)

So, what about making type signatures mandatory,
as the rest of the civilized world does happily for decades ...

Yeah, I know this would "break" some programs,
but aren't these "broken" from the start,
because they are missing the easiest, safest,
and most effective form of documentation?

If you say "writing out all type signatures is awkward",
then exactly why? Because the type system is too complex?
Then it should be fixed. I think it's not. Then perhaps
because the types of the functions are too complex?
Then these functions should be fixed (by refactoring,
introducing helper type names, etc.).

If this seems impossible, then the function itself probably *is*
complex, and its type would give valuable information,
and I don't see how a programmer (or a reader) benefits
from a language that allows this information to be omitted.

Respectfully submitted,
-- 
-- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 --
 http://www.imn.htwk-leipzig.de/~waldmann/ ---

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


The dreaded M-R

2006-01-26 Thread John Hughes
I had promised myself not to propose anything that was not already
implemented, tried, and tested, but I just can't resist bringing up a
proposal I've made in the past to fix the monomorphism restriction. Maybe
now is the time to do so.

I know many will proclaim "just get rid of it", but consider this: without
the M-R, some programs can run exponentially slower than you expect. This
actually happened to me, which is how we discovered something of the sort
was needed in the first place. But the current design is surely a wart.
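
For anyone who has not been bitten by this, here is a minimal, hypothetical
demonstration of the repeated evaluation the M-R protects against. Compiled
without optimisation, the trace message for the overloaded binding fires
once per use, while the monomorphic binding is computed only once:

   import Debug.Trace (trace)

   -- Overloaded: under the hood this is a function of a Num dictionary,
   -- so each use site builds a fresh thunk and redoes the work.
   expensivePoly :: Num a => a
   expensivePoly = trace "computing (poly)" (sum (replicate 100000 1))

   -- Monomorphic (what the M-R gives by default): evaluated at most once.
   expensiveMono :: Int
   expensiveMono = trace "computing (mono)" (sum (replicate 100000 1))

   main :: IO ()
   main = do
     print (expensivePoly + expensivePoly :: Int)  -- traces twice
     print (expensiveMono + expensiveMono)         -- traces once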

The fact is, Haskell has two different binding mechanisms--bind-by-name
(used for overloaded definitions), and bind-by-need (monomorphic). Both
are useful: bind-by-name lets us name overloaded expressions, while
bind-by-need gives performance guarantees. The trouble is just the way
we distinguish them--where the compiler is basically guessing from the
form of a definition which one to use. That leads to two problems:

* You can't eta-convert definitions freely, if there is no type signature.
  We've all hit this one, where you write something like sum = foldr (+) 0
  and you can't export it, because it's monomorphic.

* Definitions without a type signature can change from polymorphic
  to monomorphic as a result of changes elsewhere in the program.
  Because the M-R applies only to overloaded definitions, introducing,
  for example, an equality test in a function the definition calls can
  change its type, and make the M-R suddenly apply where it did not before.
  That can lead to unexpected errors. (Both problems are illustrated in the
  sketch below.)
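
A hedged sketch of both problems (the bindings are made up):

   -- Problem 1: the eta-reduced form is a simple pattern binding, so the
   -- M-R applies; the Num constraint is not generalised, defaulting pins
   -- the type to [Integer] -> Integer, and it cannot be used on Doubles.
   mysum = foldr (+) 0

   -- The eta-expanded form is a function binding and stays polymorphic:
   -- mysum' :: Num a => [a] -> a
   mysum' xs = foldr (+) 0 xs

   -- Problem 2: firstOf is unconstrained today, so the M-R ignores it.
   -- Change clean to use Data.List.nub (which needs Eq) and firstOf
   -- becomes overloaded; the M-R then applies, the Eq constraint cannot
   -- be defaulted, and the module no longer compiles.
   clean xs = xs
   firstOf  = head . clean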

The solution I favour is simply to use *different syntax* for the two
forms of binding, so that a definition is monomorphic, and computed
at most once, if it uses the monomorphic binding operator, and
polymorphic/overloaded, computed at each use, if it uses the other.
Whether it's a function definition or not is irrelevant, as is whether
or not it carries a type signature.

The trick is finding good syntax. I suggest = for bind-by-name, and
:= for bind-by-need. (Some object that := "means" assignment--but come
on, we're not reserving := for future use as assignment in Haskell, are we?
Why should we give up a perfectly good symbol because it's used elsewhere
to mean something else?). With this notation, = would be appropriate
for function definitions, and := for most non-function definitions. It
would be instantly clear where there was a possibility of repeated
evaluation, and where not.

The problem with making such a syntactic distinction is that, however
it's done, many changes must be made to existing programs. Precisely because
existing programs contain many bindings of each sort, there's no
getting away from the fact that a syntactic distinction will force
changes. In principle this could be automated, of course--not hard
but somebody would have to do it. But perhaps it would be worth it,
to eliminate probably the number one wart, and solve the problems
above.

I put it on the table, anyway.

John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime