Re: [Haskell-cafe] Actors and message-passing a la Erlang

2010-07-26 Thread Martin Sulzmann
Not distributed (yet) but concurrent:
http://hackage.haskell.org/package/actor

The paper "Actors with Multi-headed Message Receive Patterns" (COORDINATION 2008,
http://www.informatik.uni-trier.de/%7Eley/db/conf/coordination/coordination2008.html#SulzmannLW08)
describes the design rationale.
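
Not the actor package's own API, but for a taste of the idiom, here is a
minimal sketch of actor-style message passing using only the standard
Control.Concurrent primitives; it cannot express the multi-headed receive
patterns the paper is about, since each actor matches one message at a time:

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

-- An actor is a thread that owns a mailbox (a Chan) and loops,
-- receiving one message at a time and reacting to it.
data Msg = Ping (Chan Msg) | Pong

actor :: Chan Msg -> IO ()
actor mbox = do
  msg <- readChan mbox
  case msg of
    Ping reply -> writeChan reply Pong >> actor mbox
    Pong       -> putStrLn "got pong"  >> actor mbox

main :: IO ()
main = do
  a <- newChan
  b <- newChan
  _ <- forkIO (actor a)
  _ <- forkIO (actor b)
  writeChan a (Ping b)   -- ask the first actor to pong the second
  threadDelay 100000     -- crude: give the actors time to run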

Cheers,
  Martin

On Sun, Jul 25, 2010 at 10:55 PM, Yves Parès limestr...@gmail.com wrote:

 Hello !

 I've been studying Erlang and Scala, and I was wondering if someone has
 already implemented an actor and message-passing framework for concurrent
 and distributed programs in Haskell.

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] checking types with type families

2010-07-01 Thread Martin Sulzmann
On Thu, Jul 1, 2010 at 7:54 PM, Simon Peyton-Jones simo...@microsoft.com wrote:


 |  Also, what is the difference between fundeps and type families
 |  wrt local type constraints? I had always assumed them to be
 |  equivalent, if fully implemented. Similar to logic vs functional
 |  programming, where Haskellers tend to find the latter more
 |  convenient. Functional logic programming shows that there
 |  are some tricks missing if one just drops the logic part.

 Until now, no one has known how to combine fundeps and local constraints.
  For example

  class C a b | a -> b where
    op :: a -> b

  instance C Int Bool where
    op n = n > 0

  data T a where
    T1 :: T a
    T2 :: T Int

  -- Does this typecheck?
  f :: C a b => T a -> Bool
  f T1 = True
  f T2 = op 3

 The function f should typecheck because inside the T2 branch we know that
 (a~Int), and hence by the fundep (b~Bool).  But we have no formal type
 system for fundeps that describes this, and GHC's implementation certainly
 rejects it.


Martin Sulzmann, Jeremy Wazny, Peter J. Stuckey:
A Framework for Extended Algebraic Data Types. FLOPS 2006: 47-64
http://www.informatik.uni-trier.de/%7Eley/db/conf/flops/flops2006.html#SulzmannWS06

describes such a system, fully implemented in Chameleon, but this
system is no longer maintained.

Type families and fundeps are equivalent in expressive power and it's
not too hard to show how to encode one in terms of the other.
Local constraints are an orthogonal extension. In terms of type inference,
type families + local constraints and fundeps + local constraints pose the
same challenges.
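
For instance, here is Simon's example from above recast with a type family
plus the local constraints a GADT match provides; a sketch which GHC does
accept (modulo GHC version details):

{-# LANGUAGE TypeFamilies, GADTs, ScopedTypeVariables #-}

type family F a
type instance F Int = Bool

class C a where
  op :: a -> F a

instance C Int where
  op n = n > 0

data T a where
  T1 :: T a
  T2 :: T Int

-- In the T2 branch the local constraint (a ~ Int) is in scope, so
-- F a reduces to Bool and the right-hand side typechecks.
f :: forall a. C a => T a -> Bool
f T1 = True
f T2 = op (3 :: a)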

Probably, Simon is referring to the 'unresolved' issue of providing a
System F style translation for fundeps + local constraints. Well, the point
is that System FC is geared toward type families. The two possible solutions
are (a) to consider fundeps as syntactic sugar for type families (which
doesn't quite work once you throw in overlapping instances), or (b) to design
a variant System FC_fundep which has built-in support for fundeps.

-Martin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Non-termination of type-checking

2010-01-29 Thread Martin Sulzmann
On Fri, Jan 29, 2010 at 8:56 AM, o...@okmij.org wrote:


 Here is a bit more simplified version of the example. The example has
 no value level recursion and no overt recursive types, and no impredicative
 polymorphism. The key is the observation, made earlier, that two types
c (c ()) and R (c ())
 unify when c = R. Although the GADT R c below is not recursive, when
 we instantiate c = R, it becomes recursive, with the negative
 occurrence. The trouble is imminent.

 We reach the conclusion that an instance of a non-recursive GADT
 can be a recursive type. GADT may harbor recursion, so to speak.

 The code below, when loaded into GHCi 6.10.4, diverges on
 type-checking. It type-checks when we comment-out absurd.


 {-# LANGUAGE GADTs, EmptyDataDecls #-}

 data False  -- No constructors

 data R c where  -- Not recursive
    R :: (c (c ()) -> False) -> R (c ())

 -- instantiate c to R, so (c (c ())) and R (c ()) coincide
 -- and we obtain a recursive type
 --  mu R. (R (R ()) -> False) -> R (R ())

 cond_false :: R (R ()) -> False
 cond_false x@(R f) = f x

 absurd :: False
 absurd = cond_false (R cond_false)


GHC (the compiler) terminates.

The following variants terminate, with either GHCi or GHC:

absurd1 :: False
absurd1 = let x = R cond_false
          in cond_false x

absurd2 =  R cond_false

absurd3 x = cond_false x

absurd4 :: False
absurd4 = absurd3 absurd2

This suggests there's a bug in the type checker.
If I scribble down the type equations, I can't see
why the type checker should loop here.

-Martin




 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Restrictions on associated types for classes

2009-12-17 Thread Martin Sulzmann
The statements

class Cl [a] => Cl a

and

instance Cl a => Cl [a]

(I omit the type family constructor in the head for simplicity)

state the same (logical) property:

For each Cl t there must exist Cl [t].

Their operational meaning is different under the dictionary-passing
translation [1]. The instance declaration says we build the dictionary
for Cl [a] given the dictionary for Cl a. The superclass declaration says
that the dictionary for Cl [a] must be derivable (extractable) from
Cl a's dictionary. So, here we run into a cycle (on the level of terms
as well as type inference).

However, if we'd adopt a type-passing translation [2] (similar to
dynamic method lookup in OO languages) then there isn't
necessarily a cycle (for terms and type inference). Of course,
we still have to verify the 'cyclic' property, which smells like
running into non-termination if we use inductive reasoning
(but we might be fine applying co-induction).
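
To make the cycle under the dictionary-passing reading concrete, here is a
minimal sketch (the record type and names are illustrative, not GHC's
actual encoding):

-- Hypothetical dictionary type for Cl:
data ClDict a = ClDict
  { clSuper  :: ClDict [a]   -- superclass field: Cl [a] from Cl a
  , clMethod :: a -> Bool    -- stands in for Cl's members
  }

-- The instance (Cl a => Cl [a]) as a dictionary transformer; filling in
-- the superclass field immediately calls back into the transformer:
mkListDict :: ClDict a -> ClDict [a]
mkListDict d = ClDict (mkListDict (mkListDict d)) (const True)

Thanks to laziness the infinite tower of superclass dictionaries is only
built on demand, which is why a co-inductive reading is plausible at the
level of terms; the hard part is making type inference terminate.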

-Martin

[1] Cordelia V. Hall, Kevin Hammond, Simon L. Peyton Jones, Philip Wadler:
Type Classes in Haskell. ACM Trans. Program. Lang. Syst. 18(2): 109-138 (1996)
http://www.informatik.uni-trier.de/%7Eley/db/journals/toplas/toplas18.html#HallHJW96

[2] Satish R. Thatte: Semantics of Type Classes Revisited. LISP and
Functional Programming 1994: 208-219
http://www.informatik.uni-trier.de/%7Eley/db/conf/lfp/lfp1994.html#Thatte94

On Thu, Dec 17, 2009 at 6:40 PM, Simon Peyton-Jones
simo...@microsoft.com wrote:

 |  Hmm.  If you have
 |class (Diff (D f)) => Diff f where
 | 
 |  then if I have
 |  f :: Diff f => ...
 |  f = e
 |  then the constraints available for discharging constraints arising
 |  from e are
 |  Diff f
 |  Diff (D f)
 |  Diff (D (D f))
 |  Diff (D (D (D f)))
 |  ...
 | 
 |  That's a lot of constraints.
 |
 | But isn't it a bit like having an instance
 |
 |Diff f = Diff (D f)

 A little bit.  And indeed, could you not provide such instances?  That is,
 every time you write an equation for D, such as
 type D (K a) = K Void
 make sure that Diff (K Void) also holds.

 The way you have it, when you call f :: Diff f => blah, you are obliged to
 pass runtime evidence that (Diff f) holds.  And that runtime evidence
 includes as a sub-component runtime evidence that (Diff (D f)) holds.   If
 you like, the evidence for Diff f looks like this:
    data Diff f = MkDiff (Diff (D f)) (D f x -> x -> f x)
 So you are going to have to build an infinite data structure.  You can do
 that fine in Haskell, but type inference looks jolly hard.

 For example, suppose we are seeking evidence for
Diff (K ())
 We might get such evidence from either
  a) using the instance decl
 instance Diff (K a) where ...
 or
  b) using the fact that (D I) ~ K (), we need Diff I, so
we could use the instance
  instance Diff I

 Having two ways to get the evidence seems quite dodgy to me, even apart
 from the fact that I have no clue how to do type inference for it.

 Simon
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re:

2009-10-14 Thread Martin Sulzmann
On Wed, Oct 14, 2009 at 7:33 AM, o...@okmij.org wrote:


 Martin Sulzmann wrote:
  Undecidable instances means that there exists a program for which
 there's
  an infinite reduction sequence.

 I believe this may be too strong a statement. There exist patently
 terminating type families that still require undecidable
 instances in GHC.


Sorry, I wasn't precise enough.

I didn't mean to say that *every* program which requires undecidable
instances won't terminate.

Rather, take any of the properties which imply decidability. Then,
there *exists* a program which satisfies the negated property and this
program won't terminate.

As you show, for specific cases we can argue that undecidable instances
are decidable. You can even argue that the Add/Mult example is decidable,
assuming we never generate loopy type constraints.

-Martin


 Here is an example:

  {-# LANGUAGE TypeFamilies #-}
 
  type family I x :: *
  type instance I x = x
 
  type family B x :: *
  type instance B x = I x


 GHC 6.8.3 complains:
Application is no smaller than the instance head
  in the type family application: I x
(Use -fallow-undecidable-instances to permit this)
In the type synonym instance declaration for `B'

 But there cannot possibly be any diverging reduction sequence here, can there?
 The type family I is the identity, and the type family B is its
 alias. There is no recursion. The fact that type families are open is
 not relevant here: our type families I and B are effectively closed,
 because one cannot define any more instance for I and B (or risk
 overlap, which is rightfully not supported for type families).

 The reason GHC complains is because it checks termination
 instance-by-instance. To see the termination in the above program, one
 should consider instances I and B together. Then we will see that I
 does not refer to B, so there are no loops. But this global
 termination check -- for a set of instances -- is beyond the
 abilities of GHC. This is arguably the right decision: after all, GHCi
 is not a full-blown theorem prover.

 Thus there are perfectly decidable type programs that require
 undecidable instances. Indeed, there is no reason to be afraid of that
 extension.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type-level naturals multiplication

2009-10-13 Thread Martin Sulzmann
On Tue, Oct 13, 2009 at 9:37 AM, Simon Peyton-Jones
simo...@microsoft.com wrote:

 |  Is there any way to define type-level multiplication without requiring
 |  undecidable instances?
 |
 | No, not at the moment.  The reasons are explained in the paper Type
 | Checking with Open Type Functions (ICFP'08):
 |
 |
 http://www.cse.unsw.edu.au/~chak/papers/tc-tfs.pdf
 |
 | We want to eventually add closed *type families* to the system (ie,
 | families where you can't add new instances in other modules).  For
 | such closed families, we should be able to admit more complex
 | instances without requiring undecidable instances.

 It's also worth noting that while undecidable instances sound scary,
 all it means is that the type checker can't prove that type inference will
 terminate.  We accept this lack-of-guarantee for the programs we *run*, and
 type inference can (worst case) take exponential time which is not so
 different from failing to terminate; so risking non-termination in type
 inference is arguably not so bad.



Some further details to shed some light on this topic.

Undecidable instances means that there exists a program for which there's
an infinite reduction sequence.
By undecidable I refer to instances violating the conditions in the
ICFP'08 paper
and in the earlier JFP paper Understanding Functional Dependencies via
Constraint Handling Rules.

Consider the classic example

   Add (Succ x) x ~ x
--> Succ (Add x x) ~ x

Substitute for x and you'll get another redex of the form

 Add (Succ ..) ..., and therefore the reduction won't terminate.
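
For reference, here is a sketch of the type-level naturals in question,
assuming the usual Zero/Succ encoding. Add passes GHC's per-instance
termination check, while Mul nests one family application inside another
and is the kind of definition that forces undecidable instances:

{-# LANGUAGE TypeFamilies, UndecidableInstances, EmptyDataDecls #-}

data Zero
data Succ n

type family Add m n
type instance Add Zero     n = n
type instance Add (Succ m) n = Succ (Add m n)

-- The recursive call occurs under another family application (Add),
-- which falls outside the decidable fragment:
type family Mul m n
type instance Mul Zero     n = Zero
type instance Mul (Succ m) n = Add n (Mul m n)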

To fix this problem, i.e. to prevent the type checker from looping,
we could either

 (a) spot the loop in Add (Succ x) x ~ x and reject this
     unsatisfiable constraint and thus the program, or
 (b) simply stop after n steps.

The ad-hoc approach (b) restores termination but risks incompleteness.

Approach (a) is non-trivial to get right; there are more complicated
loopy programs where spotting the loops gets really tricky.

The bottom line is this:

Running the type checker on undecidable instances means that
there are programs for which

   - the type checker won't terminate, or
   - wrongly rejects the program (incompleteness)

So, the situation is slightly more scary, BUT
programs exhibiting the above behavior are (in my experience)
rare/contrived.

-Martin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Resolving overloading loops for circular constraint graph

2009-09-10 Thread Martin Sulzmann
2009/9/9 Stefan Holdermans ste...@cs.uu.nl

 Dear Martin,

  By black-holing you probably mean co-induction. That is,
 if the statement to proven appears again, we assume it must hold.
 However, type classes are by default inductive, so there's no
 easy fix to offer to your problem.


 A propos: are there fundamental objections to coinductive resolving, i.e.,
 constructing recursive dictionaries? I suppose termination is hard to
 guarantee then: is it enough to require constraints to be productive w.r.t.
 instance heads?


Yes, you need instances to be productive, otherwise, you get
bogus cyclic proofs like
instance Foo a => Foo a

dictFooa = dictFooa

You could call this a bug, or simply blame the programmer
for writing down a 'bogus' instance.
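
By contrast, when the recursion is guarded by a constructor, GHC can tie
the knot and build a recursive dictionary. A standard example, as a sketch
(it needs FlexibleContexts and UndecidableInstances):

newtype Mu f = In (f (Mu f))

-- Solving Show (Mu []) needs Show [Mu []], which in turn needs
-- Show (Mu []) again; the solver reuses the dictionary under
-- construction instead of looping.
instance Show (f (Mu f)) => Show (Mu f) where
  show (In x) = "In (" ++ show x ++ ")"

example :: String
example = show (In [] :: Mu [])   -- yields "In ([])"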

Under co-inductive type class solving more (type class) programs will
terminate (my guess). Here are some references:

Satish R. Thatte: Semantics of Type Classes Revisited. LISP and Functional
Programming 1994: 208-219
http://www.informatik.uni-trier.de/%7Eley/db/conf/lfp/lfp1994.html#Thatte94

As far as I know, the first paper which talks about co-induction and type
classes.

I myself and some co-workers explored this idea further in some
unpublished draft:

 AUTHOR = {M. Sulzmann and J. Wazny and P. J. Stuckey},
 TITLE = {Co-induction and Type Improvement in Type Class Proofs},
 NOTE = {Manuscript},
 YEAR = {2005},
 MONTH = {July},
 PS = {http://www.cs.mu.oz.au/~sulzmann/manuscript/coind-type-class-proofs.ps}


I'm quite fond of co-induction because it's such an elegant and powerful
proof technique. However, GHC's co-induction mechanism is fairly limited,
see Simon's reply, so I'm also curious to see what's happening in the
future.

-Martin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Resolving overloading loops for circular constraint graph

2009-09-09 Thread Martin Sulzmann
Your example must loop because, as you show below,
the instance declaration leads to a cycle.

By black-holing you probably mean co-induction. That is,
if the statement to proven appears again, we assume it must hold.
However, type classes are by default inductive, so there's no
easy fix to offer to your problem.

In any case, this is not a bug, you simply play with fire
once type functions show up in the instance context.
There are sufficient conditions to guarantee termination,
but they are fairly restrictive.

-Martin

2009/9/9 Stefan Holdermans ste...@cs.uu.nl

 Manuel, Simon,

 I've spotted a hopefully small but for us quite annoying bug in GHC's type
 checker: it loops when overloading resolution involves a circular constraint
 graph containing type-family applications.

 The following program (also attached) demonstrates the problem:

  {-# LANGUAGE FlexibleContexts #-}
  {-# LANGUAGE TypeFamilies #-}

  type family F a :: *
  type instance F Int = (Int, ())

  class C a
  instance C ()
  instance (C (F a), C b) => C (a, b)

  f :: C (F a) => a -> Int
  f _ = 2

  main :: IO ()
  main = print (f (3 :: Int))

 My guess is that the loop is caused by the constraint C (F Int) that arises
 from the use of f in main:

  C (F Int) => C (Int, ()) => C (F Int)

 Indeed, overloading can be resolved successfully by black-holing the
 initial constraint, but it seems like the type checker refuses to do so.

 Can you confirm my findings?

 I'm not sure whether this is a known defect. If it isn't, I'd be more than
 happy to issue a report.

 Since this problem arises in a piece of very mission-critical code, I would
 be pleased to learn about any workarounds.

 Thanks in advance,

  Stefan
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Martin Sulzmann

Lennart Augustsson wrote:

You can't write a straightforward dynamic semantics (in, say,
denotational style) for Haskell.
The problem is that with type classes you need to know the types
to compute the values.
You could use a two step approach: first make a static semantics that
does type inference/checking and translates Haskell into a different
form that has resolved all overloading.
And, secondly, you can write a dynamic semantics for that language.

But since the semantics has to have the type inference engine inside
it, it's going to be a pain.

It's possible that there's some more direct approach that represents
types as some kind of runtime values, but nobody (to my knowledge) has
done that.

  

This has been done. For example, check out the type class semantics given in

Thatte, Semantics of type classes revisited, LFP '94
Stuckey and Sulzmann, A Theory of Overloading, TOPLAS'05

Harrison: A Simple Semantics for Polymorphic Recursion. APLAS 2005
is also related I think.

The ICFP'08 poster

Unified Type Checking for Type Classes and Type Families, Tom
Schrijvers and Martin Sulzmann


suggests that a type-passing semantics can even be programmed by 
(mis)using type families.
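
A micro-illustration of Lennart's point that the types compute the values:
the value below depends on a type that never appears at run time, so a
direct dynamic semantics has to account for how overloading is resolved.

roundTrip :: String -> String
roundTrip s = show (read s :: Double)

-- roundTrip "1" evaluates to "1.0"; change the annotation to Int and
-- the very same source expression denotes a different value, "1".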


- Martin



  -- Lennart

On Wed, Nov 12, 2008 at 12:39 PM, Luke Palmer [EMAIL PROTECTED] wrote:
  

On Wed, Nov 12, 2008 at 3:21 AM, Jules Bean [EMAIL PROTECTED] wrote:


Andrew Birkett wrote:
  

Hi,

Is there a formal proof that the Haskell language is referentially transparent?
 Many people state Haskell is RT without backing up that claim.  I know
that, in practice, I can't write any counter-examples but that's a bit
hand-wavy.  Is there a formal proof that, for all possible Haskell
programs, we can replace coreferent expressions without changing the meaning
of a program?


The (well, a natural approach to a) formal proof would be to give a formal
semantics for Haskell.
  

Haskell 98 does not seem that big to me (it's not teeny, but it's
nothing like C++), yet we are continually embarrassed about not having
any formal semantics.  What are the challenges preventing its
creation?

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Martin Sulzmann


Correct, Lennart. The below mentioned papers assume some
evidence translation of type class programs. If you need/want
a direct semantics/translation of programs you'll need to
impose some restrictions on the set of allowable type class programs.

For such an approach see

  Martin Odersky, Philip Wadler, Martin Wehr: A Second Look at 
Overloading. FPCA 1995:


Roughly, the restriction says, you can overload the argument but not the 
result (types).
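
A small sketch of the shape of that restriction (Size and Default are
just illustrative names):

-- Overloading on an argument type is fine: the argument value
-- determines which instance applies.
class Size a where
  size :: a -> Int

-- Overloading on the result type only is what gets ruled out: which
-- instance applies is determined purely by type information, which a
-- direct (type-free) dynamic semantics cannot see.
class Default a where
  def :: a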


- Martin


Lennart Augustsson wrote:

I had a quick look at Stuckey and Sulzmann, A Theory of Overloading
and it looks to me like the semantics is given by evidence
translation.  So first you do evidence translation, and then give
semantics to the translated program.  So this looks like the two step
approach I first mentioned.
Or have I misunderstood what you're doing?

What I meant hasn't been done is a one step semantics directly in
terms of the source language.
I guess I also want it to be compositional, which I think is where
things break down.

  -- Lennart

On Wed, Nov 12, 2008 at 2:26 PM, Martin Sulzmann
[EMAIL PROTECTED] wrote:
  

Lennart Augustsson wrote:


You can't write a straightforward dynamic semantics (in, say,
denotational style) for Haskell.
The problem is that with type classes you need to know the types
to compute the values.
You could use a two step approach: first make a static semantics that
does type inference/checking and translates Haskell into a different
form that has resolved all overloading.
And, secondly, you can write a dynamic semantics for that language.

But since the semantics has to have the type inference engine inside
it, it's going to be a pain.

It's possible that there's some more direct approach that represents
types as some kind of runtime values, but nobody (to my knowledge) has
done that.


  

This has been done. For example, check out the type class semantics given in

Thatte, Semantics of type classes revisited, LFP '94
Stuckey and Sulzmann, A Theory of Overloading, TOPLAS'05

Harrison: A Simple Semantics for Polymorphic Recursion. APLAS 2005
is also related I think.

The ICFP'08 poster

Unified Type Checking for Type Classes and Type Families , Tom Schrijvers
and Martin Sulzmann

suggests that a type-passing semantics can even be programmed by (mis)using
type families.

- Martin




 -- Lennart

On Wed, Nov 12, 2008 at 12:39 PM, Luke Palmer [EMAIL PROTECTED] wrote:

  

On Wed, Nov 12, 2008 at 3:21 AM, Jules Bean [EMAIL PROTECTED]
wrote:



Andrew Birkett wrote:

  

Hi,

Is there a formal proof that the Haskell language is referentially
transparent?
 Many people state Haskell is RT without backing up that claim.  I know
that, in practice, I can't write any counter-examples but that's a bit
hand-wavy.  Is there a formal proof that, for all possible Haskell
programs, we can replace coreferent expressions without changing the
meaning of a program?



The (well, a natural approach to a) formal proof would be to give a
formal semantics for Haskell.

  

Haskell 98 does not seem that big to me (it's not teeny, but it's
nothing like C++), yet we are continually embarrassed about not having
any formal semantics.  What are the challenges preventing its
creation?

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-21 Thread Martin Sulzmann


I now recall the reason for NOT using

D a b, D [a] c ==> c = [b]


The reason is that the above rule
creates a new critical pair with

instance D a b => D [a] [b]

To resolve the critical pair we need yet another rule

D a b, D [[a]] c ==> c = [[b]]

You can already see where this leads to.

In general, we need infinitely many rules of the form

D a b, D [a]^k c ==> c = [b]^k

where

[a]^k stands for the k-times nested list [ ... [a] ... ]

The FD-CHR approach therefore proposes to use

D [a] c ==> c = [b]

which is a stronger rule (that is, it is not a logical consequence).

Martin


Martin Sulzmann wrote:

Thanks Iavor! Things become now clear.

Let's consider our running example

class D a b | a -> b
instance D a b => D [a] [b]

which we can write in CHR notation

D a b, D a c ==> b = c    (FD)

D [a] [b] <=> D a b    (Inst)

These rules overlap.

Let's consider the critical pair

D [a] [b], D [a] c

The following two derivations are possible

   D [a] [b], D [a] c
-->FD    D [a] [b], c = [b]
-->Inst  D a b, c = [b]


   D [a] [b], D [a] c
-->Inst  D a b, D [a] c

The two final stores differ which means that the
CHR system is non-confluent. Hence, the
proof theory is (potentially) incomplete.
What does this mean?
Logically true statements may not be derivable
using our CHR/type-class-FD solver.

Iavor suggests to add the rule

D [a] c, D a b ==> c = [b]    (Imp1)

Makes perfect sense!

This rule is indeed a theorem and makes the system confluent.

But that's not what the FD-CHR paper does.

The FD-CHR paper generates the stronger rule

D [a] c ==> c = [b]    (Imp2)

This rule is NOT a theorem (i.e. a logical consequence of the
FD and Inst rules).
But this rule also makes the system confluent.

Why does the FD-CHR paper suggest this rule?
Some reasons:

- (Imp2) matches the GHC and, I believe, also the Hugs implementation
- (Imp2) is easier to implement: we only need to look for
  a single constraint when firing the rule
- (Arguable) (Imp2) better matches the programmer's intuition.
  We only consider the instance head when generating improvement
  but ignore the instance context.
- (Historical note: the rule suggested by Iavor was discussed
  when writing the FD-CHR paper but somehow we never
  pursued this alternative design space.
  I have to dig out some old notes; maybe there were other reasons,
  e.g. infinite completion, why this design space wasn't pursued.)

To summarize, I see now where the confusion lies.
The FD-CHR studies a stronger form of FDs where the CHR
improvement rules generated guarantee confluence but the
rules are not necessarily logical consequence.
Therefore, the previously discussed property

 a -> b and a -> c iff a -> b c

does of course NOT hold. That is,
the combination of improvement rules derived from a -> b and a -> c
is NOT equivalent to the improvement rules derived from a -> b c.
Logically, the equivalence obviously holds.

Martin


Iavor Diatchki wrote:

Hello,

On Thu, Apr 17, 2008 at 12:05 PM, Martin Sulzmann
[EMAIL PROTECTED] wrote:
 
 Can you please specify the improvement rules for your interpretation of
FDs.

That would help!



Each functional dependency on a class adds one extra axiom to the
system (aka CHR rule, improvement rule).  For the example in question
we have:

class D a b | a -> b where ...

the extra axiom is:

forall a b c. (D a b, D a c) => (b = c)

This is the definition of functional dependency---it specifies that
the relation 'D' is functional.  An improvement rule follows from a
functional dependency if it can be derived from this rule.  For
example, if we have an instance (i.e., another axiom):

instance D Char Bool

Then we can derive the following theorem:

(D Char a) => (a = Bool)

I think that in the CHR paper this was called instance improvement.
Note that this is not an extra axiom but rather a theorem---adding it
to the system as an axiom does not make the system any more
expressive.  Now consider what happens when we have a qualified
instance:

instance D a a => D [a] [a]

We can combine this with the FD axiom to get:

(D a a, D [a] b) => b = [a]

This is all that follows from the functional dependency.  Of course,
in the presence of other instances, we could obtain more improvement
rules.

As for the consistency rule, it is intended to ensure that instances
are consistent with the FD axiom.  As we saw from the previous
examples, it is a bit conservative in that it rejects some instances
that do not violate the functional dependency.   Now, we could choose
to exploit this fact to compute stronger improvement rules---nothing
wrong with that.  However, this goes beyond FDs.

-Iavor








 

 I'm simply following Mark Jones' style FDs.

 Mark's ESOP'00 paper has a consistency condition:
 If two instances match on the FD domain then they must also match on
their range.
 The motivation for this condition is to avoid inconsistencies when
 deriving improvement rules from instances.

 For




 

 class D a b | a -> b

 instance D a a = D

Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-18 Thread Martin Sulzmann

Lennart Augustsson wrote:
To reuse a favorite word, I think that any implementation that
distinguishes 'a -> b, a -> c' from 'a -> b c' is broken. :)
It does not implement FD, but something else.  Maybe this something 
else is useful, but if one of the forms is strictly more powerful than 
the other then I don't see why you would ever want the less powerful one.



Do you have any good examples, besides the contrived one

class D a b c | a -> b c

instance D a b b => D [a] [b] [b]

where we want to have the more powerful form of multi-range FDs?

Fixing the problem you mention is easy. After all, we know how to derive
improvement for multi-range FDs. But it seems harder to find agreement
whether

multi-range FDs are short-hands for single-range FDs, or
certain single-range FDs, e.g. a -> b and a -> c, are shorthands for more
powerful multi-range FDs a -> b c.

I clearly prefer the latter, i.e. have the more powerful form of FDs.

Martin


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-18 Thread Martin Sulzmann

Thanks Iavor! Things become now clear.

Let's consider our running example

class D a b | a -> b
instance D a b => D [a] [b]

which we can write in CHR notation

D a b, D a c ==> b = c    (FD)

D [a] [b] <=> D a b    (Inst)

These rules overlap.

Let's consider the critical pair

D [a] [b], D [a] c

The following two derivations are possible

   D [a] [b], D [a] c
-->FD    D [a] [b], c = [b]
-->Inst  D a b, c = [b]


   D [a] [b], D [a] c
-->Inst  D a b, D [a] c

The two final stores differ which means that the
CHR system is non-confluent. Hence, the
proof theory is (potentially) incomplete.
What does this mean?
Logically true statements may not be derivable
using our CHR/type-class-FD solver.

Iavor suggests to add the rule

D [a] c, D a b ==> c = [b]    (Imp1)

Makes perfect sense!

This rule is indeed a theorem and makes the system confluent.

But that's not what the FD-CHR paper does.

The FD-CHR paper generates the stronger rule

D [a] c ==> c = [b]    (Imp2)

This rule is NOT a theorem (i.e. a logical consequence of the
FD and Inst rules).
But this rule also makes the system confluent.

Why does the FD-CHR paper suggest this rule?
Some reasons:

- (Imp2) matches the GHC and, I believe, also the Hugs implementation
- (Imp2) is easier to implement: we only need to look for
  a single constraint when firing the rule
- (Arguable) (Imp2) better matches the programmer's intuition.
  We only consider the instance head when generating improvement
  but ignore the instance context.
- (Historical note: the rule suggested by Iavor was discussed
  when writing the FD-CHR paper but somehow we never
  pursued this alternative design space.
  I have to dig out some old notes; maybe there were other reasons,
  e.g. infinite completion, why this design space wasn't pursued.)

To summarize, I see now where the confusion lies.
The FD-CHR studies a stronger form of FDs where the CHR
improvement rules generated guarantee confluence but the
rules are not necessarily logical consequence.
Therefore, the previously discussed property

 a -> b and a -> c iff a -> b c

does of course NOT hold. That is,
the combination of improvement rules derived from a -> b and a -> c
is NOT equivalent to the improvement rules derived from a -> b c.
Logically, the equivalence obviously holds.

Martin


Iavor Diatchki wrote:

Hello,

On Thu, Apr 17, 2008 at 12:05 PM, Martin Sulzmann
[EMAIL PROTECTED] wrote:
  

 Can you please specify the improvement rules for your interpretation of FDs.
That would help!



Each functional dependency on a class adds one extra axiom to the
system (aka CHR rule, improvement rule).  For the example in question
we have:

class D a b | a -> b where ...

the extra axiom is:

forall a b c. (D a b, D a c) => (b = c)

This is the definition of functional dependency---it specifies that
the relation 'D' is functional.  An improvement rule follows from a
functional dependency if it can be derived from this rule.  For
example, if we have an instance (i.e., another axiom):

instance D Char Bool

Then we can derive the following theorem:

(D Char a) => (a = Bool)

I think that in the CHR paper this was called instance improvement.
Note that this is not an extra axiom but rather a theorem---adding it
to the system as an axiom does not make the system any more
expressive.  Now consider what happens when we have a qualified
instance:

instance D a a => D [a] [a]

We can combine this with the FD axiom to get:

(D a a, D [a] b) => b = [a]

This is all that follows from the functional dependency.  Of course,
in the presence of other instances, we could obtain more improvement
rules.

As for the consistency rule, it is intended to ensure that instances
are consistent with the FD axiom.  As we saw from the previous
examples, it is a bit conservative in that it rejects some instances
that do not violate the functional dependency.   Now, we could choose
to exploit this fact to compute stronger improvement rules---nothing
wrong with that.  However, this goes beyond FDs.

-Iavor








  

 I'm simply following Mark Jones' style FDs.

 Mark's ESOP'00 paper has a consistency condition:
 If two instances match on the FD domain then they must also match on their
range.
 The motivation for this condition is to avoid inconsistencies when
 deriving improvement rules from instances.

 For




  

 class D a b | a -> b

 instance D a a => D [a] [a]
 instance D [Int] Char


 we get

 D [a] b ==> b = [a]
 D [Int] b ==> b = Char

 In case of

 D [Int] b we therefore get b = Char *and* b = [a], which leads to a
(unification) error.
 The consistency condition avoids such situations.


 The beauty of formalizing FDs with CHRs (or type functions/families) is that
 the whole improvement process becomes explicit. Of course, it has to match
 the programmer's intuition. See the discussion regarding multi-range FDs.

 Martin





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org

Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-18 Thread Martin Sulzmann

Lennart Augustsson wrote:

I've never thought of one being shorthand for the other, really.
Since they are logically equivalent (in my interpretation) I don't 
really care which one we regard as more primitive.

True. See my response to Iavor's recent email.

Martin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-17 Thread Martin Sulzmann

Mark P Jones wrote:

Martin Sulzmann wrote:
We're also looking for (practical) examples of multi-range 
functional dependencies


class C a b c | c -> a b

Notice that there are multiple (two) parameters in the range of the FD.

It's tempting to convert the above to

class C a b c | c -> a, c -> b

but this yields a weaker (in terms of type improvement) system.


I agree with Iavor.

In fact, the two sets of dependencies that you have given here
are provably equivalent, so it would be decidedly odd to have
a type improvement system that distinguishes between them.



Consider

class C a b c | a -> b, a -> c

instance C a b b => C [a] [b] [b]

Suppose we encounter the constraint

C [x] y z

What results can we expect from type improvement?

1) Single-range non-full FDs in GHC following the FD-CHR formulation:

The first FD C a b c | a -> b in combination with
the instance head C [a] [b] [b] will yield

C [x] y z improved by y = [b1] for some b1

A similar reasoning yields

C [x] y z improved by z = [b2] for some b2

So, overall we get

C [x] y z improved by y = [b1] and z = [b2]

Unfortunately, we couldn't establish that b1 and b2 are equal.
Hence, we can *not* apply the instance.

2) Alternative design:

We could now argue that the improvement implied by the FD
is only applicable if we are in the right context.

That is,
C [x] y z doesn't yield any improvement because
the context y is still underspecified (doesn't match
the instance).

In case of  C [x] [y] z we get z = [y]
(and C [x] z [y] yields z = [y])


3) Multi-range FDs

Consider

class C a b c | a -> b c

instance C a b b => C [a] [b] [b]

This time it's straightforward.

C [x] y z yields the improvement y = [b] and z = [b]
which then allows us to apply the instance.

4) Summary.

With multi-range FDs we can derive more improvement.

C [x] y z   yields y = [b] and z = [b]

Based on the FD-CHR formulation, for the single-range FD case we get

C [x] y z yields y = [b1] and z = [b2]

which is clearly weaker.

The alternative design is even weaker: from C [x] y z we cannot derive
any improvement.

So, I conclude that in the Haskell type improvement context
there's clearly a difference between single-range and multi-range FDs.

Of course, we could define multi-range FDs in terms of single-range FDs
which then trivially solves the equivalence problem (but some user
may be disappointed that their multi-range FDs yield weaker improvement).

Thanks for your pointers below but I believe that FDs in the Haskell context
can be quite different from FDs in the database context.

- Martin



While you're looking for unusual examples of fundeps, you
might also want to consider things like:

  class C a b c | a -> b, b -> c

 Note that this is subtly different from a -> b c because

   {a -> b, b -> c}  |=  {a -> b c}

while the reverse does not hold.  Instead of type classes,
I'll give you some more down-to-earth examples of relations
that satisfy these dependencies:

  {Paper, Conference, Year}
  {Professor, University, Country}
  {Person, FavoriteFood, FoodGroup}
  ...

For further insight, you might want to take a look at the theory
of relational databases to see how functional dependencies are
used there.  In that context, some sets of dependencies (such
as {a - b, b - c}) can be interpreted as indicators of bad
design.  This, in turn, might give you some ideas about the kinds
of dependencies you can expect to encounter in well-designed
Haskell code.  (Of course, you might still find examples in other
Haskell code of dependencies that would make a database person
wince :-)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-17 Thread Martin Sulzmann

Sittampalam, Ganesh wrote:

Martin Sulzmann wrote:
  

Mark P Jones wrote:



  
In fact, the two sets of dependencies that you have given here are 
provably equivalent, so it would be decidedly odd to have a type 
improvement system that distinguishes between them.
  


  

Based on the FD-CHR formulation, for the single-range FD case we
get [...] which is clearly weaker.
[...]
So, I conclude that in the Haskell type improvement context 
there's clearly a difference among single-range and multi-range FDs.



This seems like a flaw in FD-CHR, rather than a fundamental difference 
between the dependencies.


  

Of course, we could define multi-range FDs in terms of single-range FDs
which then trivially solves the equivalence problem (but some user 
may be disappointed that their multi-range FDs yield weaker improvement).



Why not instead transform single-range FDs into multi-range ones where
possible?

  


That's a perfectly reasonable assumption and would establish the logical 
property that


a -> b /\ a -> c   iff a -> b /\ c

for FDs (by definition).

But what about programmers who'd like that

C [x] y z   yields the improvement y = [b], z = [b]

where

class C a b c | a -> b c
instance C a b b => C [a] [b] [b]

It's hard to say who's right or wrong but there's a design space which needs
to be explored further.

Martin



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-17 Thread Martin Sulzmann

Sittampalam, Ganesh wrote:
Why not instead transform single-range FDs into multi-range ones where 
possible?
  


  

That's a perfectly reasonable assumption and would establish the logical 
property that



  

a -> b /\ a -> c   iff a -> b /\ c



  

for FDs (by definition).



  

But what about programmers who'd like that



  

C [x] y z   yields the improvement y = [b], z = [b]



  

where



  

class C a b c | a -> b c
instance C a b b => C [a] [b] [b]



Isn't that precisely what you earlier said would happen with multi-range FDs?
Either I'm missing some difference or we're talking at cross-purposes.

My suggestion is that

class C a b c | a -> b c and class C a b c | a -> b, a -> c both be
treated as the former case, leading to both cases having the y = [b], z = [b]
improvement as above.

  


Misunderstanding. I see what you mean. Yes, I agree let's consider

a -> b, a -> c as equivalent to a -> b c  (I argued the other direction
in my earlier email).


One subtle point (Tom and I just discussed).

It could happen that the programmer writes

class SuperCtxt => C a b c | a -> b

but there could be an implicit dependency a -> c
arising from the super class context SuperCtxt.
Hence, you suddenly get a -> b c.

The problem is to generate the proper improvement rules.
Well, it's not hard to generate these rules; we just need to make
sure that the generated rules match the programmer's intuition of
how functional dependencies behave.

Martin




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-17 Thread Martin Sulzmann

Iavor Diatchki wrote:

Hello,

On Wed, Apr 16, 2008 at 11:06 PM, Martin Sulzmann
[EMAIL PROTECTED] wrote:
  

 3) Multi-range FDs

 Consider

 class C a b c | a -> b c

 instance C a b b => C [a] [b] [b]

 This time it's straightforward.

 C [x] y z yields the improvement y = [b] and z = [b]
 which then allows us to apply the instance.



I don't think that this improvement rule is justified (unless there
are some assumptions that are added to the system that go beyond FD?).
  By the way, note that the above example does not have any instances
for C, so let's first add a base case like this:

instance C Char Bool Bool

Now the instances for C are: { C Char Bool Bool, C [Char] [Bool]
[Bool], ... }.  Certainly, if you just consider these instances, then
the improvement rule that you suggest is valid.  However, suppose that
we also add the instance:

instance C [Int] Char Bool

Note that this instance does not violate the FD: if we know the first
argument, then we know exactly what the other two arguments are.  In
this context, it is not OK to improve C [x] y z as you suggest because
'x' may be instantiated to 'Int'
  


There are several possible points of view here.

Consider  a -> b c as a short-hand for a -> b, a -> c. That's fine.

But we may also want to go beyond (single-range) FDs. That's what I have
in mind.

Therefore, a -> b, a -> c is a short-hand for a -> b c.
(At least there's one supporter, Ganesh, assuming that Tom and I are
neutral :) )


Suppose we encode the multi-range FD a -> b c as defined in the FD-CHR
paper.

Then,

class C a b c | a -> b c

instance C a b b => C [a] [b] [b]
instance C [Int] Char Bool

leads to an instance improvement/instance improvement conflict,
like in the single-range FD case

class D a b | a -> b

instance D a a => D [a] [a]
instance D [Int] Char

All of the above design choices make sense. There's no right or wrong.
But it's wrong to simply ignore possible FD variants which go beyond 
single-range FDs.


Martin




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-17 Thread Martin Sulzmann

Iavor Diatchki wrote:

Hello,

On Thu, Apr 17, 2008 at 10:26 AM, Martin Sulzmann
[EMAIL PROTECTED] wrote:
  

 leads to an instance improvement/instance improvement conflict,
 like in the single-range FD case

 class D a b | a -> b

 instance D a a => D [a] [a]
 instance D [Int] Char



Sorry to be picky but there is no violation of the FD here.  Note that
the class D has only a single ground instance and to violate an FD you
need at least two.  As in the previous example, we can add an instance
like this:

instance D Char Char

This results in more ground instances: { D [Int] Char, D Char Char, D
[Char] [Char], ... } but again, there is no violation of the FD.

I think that a lot of the confusion in discussions such as this one
(and we've had a few of those :-) stems from the fact that the term
functional dependency seems to have become heavily overloaded.
Often, the basic concept is mixed with (i) concepts related to
checking that the basic concept holds (e.g., various restrictions on
instances, etc), (ii) concepts related to how we might want to use the
basic concept (e.g., what improvement rules to use).  Of course, (i)
and (ii) are very important, and there are a lot possible design
choices.  However, a number of the discussions I have seen go like
this:
  1) In general, it is hard to check if instances violate the stated
functional dependencies.
  2) So we have more restrictive rules, that are easier to check.
  3) These more restrictive rules give us stronger guarantees, so we
have more opportunity for improvement.
While there is nothing inherently wrong with this, it is important to
note that the extra improvement is not a result of the use of FDs but
rather, from the extra restrictions that we placed on the instances.
I think that this distinction is important because (i) it avoids
mixing concepts, and (ii) points to new things that we may want to
consider.  For example, I think that there is an opportunity for
improvement in situations where a class is not exported from a
module.  Then we know the full set of instances for the class, and we
may be able to compute improvement rules.

Hope this helps!
-Iavor

  


Can you please specify the improvement rules for your interpretation of FDs.
That would help!


I'm simply following Mark Jones' style FDs.

Mark's ESOP'00 paper has a consistency condition:
If two instances match on the FD domain then they must also match on
their range.

The motivation for this condition is to avoid inconsistencies when
deriving improvement rules from instances.

For

class D a b | a -> b

instance D a a => D [a] [a]
instance D [Int] Char


we get

D [a] b ==> b = [a]
D [Int] b ==> b = Char

In case of

D [Int] b we therefore get b = Char *and* b = [a], which leads to a
(unification) error.

The consistency condition avoids such situations.


The beauty of formalizing FDs with CHRs (or type functions/families) is that
the whole improvement process becomes explicit. Of course, it has to match
the programmer's intuition. See the discussion regarding multi-range FDs.

Martin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] looking for examples of non-full Functional Dependencies

2008-04-16 Thread Martin Sulzmann
We're also looking for (practical) examples of multi-range functional 
dependencies


class C a b c | c -> a b

Notice that there are multiple (two) parameters in the range of the FD.

It's tempting to convert the above to

class C a b c | c -> a, c -> b

but this yields a weaker (in terms of type improvement) system.
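
For concreteness, the two variants as compilable declarations (a sketch,
assuming the usual extensions):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- multi-range FD: both a and b are in the range of the dependency
class C a b c | c -> a b

-- the tempting single-range split, which (as argued above) can yield
-- weaker type improvement
class C' a b c | c -> a, c -> b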

Thanks,
Martin



Tom Schrijvers wrote:

Hello,

I'm looking for practical examples of non-full functional dependencies 
and would be grateful if anyone could show me some or point to 
applications using them.


A non-full functional dependency is one that involves only part of the
parameters of a type class. E.g.


class C a b c | a -> b

has a non-full functional dependency a -> b which does not involve c.

Thanks,

Tom

--
Tom Schrijvers

Department of Computer Science
K.U. Leuven
Celestijnenlaan 200A
B-3001 Heverlee
Belgium

tel: +32 16 327544
e-mail: [EMAIL PROTECTED]
url: http://www.cs.kuleuven.be/~toms/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type families and type signatures

2008-04-11 Thread Martin Sulzmann

Lennart Augustsson wrote:
On Wed, Apr 9, 2008 at 8:53 AM, Martin Sulzmann 
[EMAIL PROTECTED] wrote:



Lennart, you said

(It's also pretty easy to fix the problem.)


What do you mean? Easy to fix the type checker, or easy to fix the
program by inserting annotations
to guide the type checker?

Martin

 
I'm referring to the situation where the type inferred by the type 
checker is illegal for me to put into the program.
In our example we can fix this in two ways, by making foo' illegal 
even when it has no signature, or making foo' legal even when it has a 
signature.


To make it illegal:  If foo' has no type signature, infer a type for 
foo', insert this type as a signature and type check again.  If this 
fails, foo' is illegal.


To make it legal: If foo' with a type signature doesn't type check, 
try to infer a type without a signature.  If this succeeds then check 
that the type that was inferred is alpha-convertible to the original 
signature.  If it is, accept foo'; the signature doesn't add any 
information.


Either of these two option would be much preferable to the current 
(broken) situation where the inferred signature is illegal.


I understand now what you meant. See my earlier reply to Claus' email.

Regarding your request to take into account alpha-convertibility when 
checking signatures.

We know that in GHC the check

(forall a, b. Foo a b => a) <= (forall a, c. Foo a c => a)    (**)

fails because when testing the constraint/formula derived from the above
subsumption check


 forall a, b. Foo a b implies exists c. Foo a c

GHC simply drops the exists c bit and clearly

  Foo a b does NOT imply Foo a c

We need to choose c to be equal to b. I call this process matching but
it's of course a form of alpha-conversion.
(We can convince GHC to accept such signatures by encoding System F style
type application via redundant type proxy arguments.)
The point I want to make is that in Chameleon I've implemented a 
subsumption check
which takes into account matching. Therefore, (**) is checkable in 
Chameleon.
BUT matching can be fairly expensive, exponential in the worst case; for
example, consider cases such as Foo x_i x_i+1.
So maybe there's a good reason why GHC's subsumption check doesn't take
into account matching.


The slightly odd thing is that GHC uses a pessimistic subsumption
check (no matching) in combination
with an optimistic ambiguity check (see my earlier message on this topic).
I'd say it's better to be pessimistic/pessimistic or
optimistic/optimistic; maybe this could be
a compiler switch?

Martin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type families and type signatures

2008-04-09 Thread Martin Sulzmann
Manuel said earlier that the source of the problem here is foo's 
ambiguous type signature

(I'm switching back to the original, simplified example).
Type checking with ambiguous type signatures is hard because the type 
checker has to guess
types and this guessing step may lead to too many (ambiguous) choices. 
But this doesn't mean

that this worst case scenario always happens.

Consider your example again

type family Id a

type instance Id Int = Int

foo :: Id a -> Id a
foo = id

foo' :: Id a -> Id a
foo' = foo

The type checking problem for foo' boils down to verifying the formula

forall a. exists b. Id a ~ Id b

Of course for any a we can pick b=a to make the type equation statement 
hold.
Fairly easy here but the point is that the GHC type checker doesn't do 
any guessing
at all. The only option you have (at the moment, there's still lots of 
room for improving

GHC's type checking process) is to provide some hints, for example mimicking
System F style type application by introducing a type proxy argument in 
combination

with lexically scoped type variables.

foo :: a -> Id a -> Id a
foo _ = id

foo' :: Id a -> Id a
foo' = foo (undefined :: a)
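
A self-contained version of this hint, as a sketch: it assumes
ScopedTypeVariables, and the explicit forall is needed so that the a in
the proxy annotation refers to the a of the signature.

{-# LANGUAGE TypeFamilies, ScopedTypeVariables #-}

type family Id a
type instance Id Int = Int

foo :: a -> Id a -> Id a
foo _ = id

foo' :: forall a. Id a -> Id a
foo' = foo (undefined :: a)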


Martin


Ganesh Sittampalam wrote:

On Wed, 9 Apr 2008, Manuel M T Chakravarty wrote:


Sittampalam, Ganesh:


No, I meant can't it derive that equality when matching (Id a) 
against (Id b)? As you say, it can't derive (a ~ b) at that point, 
but (Id a ~ Id b) is known, surely?


No, it is not known.  Why do you think it is?


Well, if the types of foo and foo' were forall a . a -> a and forall b
. b -> b, I would expect the type-checker to unify a and b in the
argument position and then discover that this equality made the result 
position unify too. So why can't the same happen but with Id a and Id 
b instead?


The problem is really with foo and its signature, not with any use of 
foo. The function foo is (due to its type) unusable.  Can't you 
change foo?


Here's a cut-down version of my real code. The type family Apply is 
very important because it allows me to write class instances for 
things that might be its first parameter, like Id and Comp SqlExpr 
Maybe, without paying the syntactic overhead of forcing client code to 
use Id/unId and Comp/unComp. It also squishes nested Maybes which is 
important to my application (since SQL doesn't have them).


castNum is the simplest example of a general problem - the whole point 
is to allow clients to write code that is overloaded over the first 
parameter to Apply using primitives like castNum. I'm not really sure 
how I could get away from the ambiguity problem, given that desire.


Cheers,

Ganesh

{-# LANGUAGE TypeFamilies, GADTs, UndecidableInstances, 
NoMonomorphismRestriction #-}


newtype Id a = Id { unId :: a }
newtype Comp f g x = Comp { unComp :: f (g x) }

type family Apply (f :: * -> *) a

type instance Apply Id a = a
type instance Apply (Comp f g) a = Apply f (Apply g a)
type instance Apply SqlExpr a = SqlExpr a
type instance Apply Maybe Int = Maybe Int
type instance Apply Maybe Double = Maybe Double
type instance Apply Maybe (Maybe a) = Apply Maybe a

class DoubleToInt s where
   castNum :: Apply s Double -> Apply s Int

instance DoubleToInt Id where
   castNum = round

instance DoubleToInt SqlExpr where
   castNum = SECastNum

data SqlExpr a where
  SECastNum :: SqlExpr Double -> SqlExpr Int

castNum' :: (DoubleToInt s) => Apply s Double -> Apply s Int
castNum' = castNum

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] type families and type signatures

2008-04-09 Thread Martin Sulzmann

I think it's not fair to say TFs as implemented in GHC are broken.
Fine, there are situations where the current implementation is overly
conservative.


The point is that the GHC type checker relies on automatic inference. 
Hence, there'll

always be cases where certain reasonable type signatures are rejected.

For example, consider the case of undecidable and non-confluent type 
class instances.


instance Foo a => Bar a    -- (1)

instance Erk a => Bar [a]  -- (2)


GHC won't accept the above type class instances (note: instances are a
kind of type signature) because


- instance (1) is potentially non-terminating (the size of the type term 
is not decreasing)
- instance (2) overlaps with (1),  hence, it can happen that during 
context reduction we choose

 the wrong instance.

To conclude, any system with automatic inference will necessarily reject
certain type signatures/instances

in order to guarantee soundness, completeness and termination.

Lennart, you said

(It's also pretty easy to fix the problem.)


What do you mean? Easy to fix the type checker, or easy to fix the 
program by inserting annotations

to guide the type checker?

Martin


Lennart Augustsson wrote:

Let's look at this example from a higher level.

Haskell is a language which allows you to write type signatures for 
functions, and even encourages you to do it.
Sometimes you even have to do it.  Any language feature that stops me 
from writing a type signature is in my opinion broken.
TFs as currently implemented in ghc stop me from writing 
type signatures.  They are thus, in my opinion, broken.


A definition should either be illegal or it should be able to have a 
signature.  I think this is a design goal.  It wasn't true in Haskell 
98, and it's generally agreed that this was a mistake.


To summarize: I don't care if the definition is useless, I want to be 
able to give it a type signature anyway.


(It's also pretty easy to fix the problem.)

  -- Lennart

On Wed, Apr 9, 2008 at 7:20 AM, Martin Sulzmann [EMAIL PROTECTED] wrote:


Manuel said earlier that the source of the problem here is foo's
ambiguous type signature
(I'm switching back to the original, simplified example).
Type checking with ambiguous type signatures is hard because the
type checker has to guess
types and this guessing step may lead to too many (ambiguous)
choices. But this doesn't mean
that this worst case scenario always happens.

Consider your example again


type family Id a

type instance Id Int = Int

foo :: Id a -> Id a
foo = id

foo' :: Id a -> Id a
foo' = foo

The type checking problem for foo' boils down to verifying the formula

forall a. exists b. Id a ~ Id b

Of course for any a we can pick b=a to make the type equation
statement hold.
Fairly easy here but the point is that the GHC type checker
doesn't do any guessing
at all. The only option you have (at the moment, there's still
lots of room for improving
GHC's type checking process) is to provide some hints, for example
mimicking
System F style type application by introducing a type proxy
argument in combination
with lexically scoped type variables.

foo :: a -> Id a -> Id a
foo _ = id

foo' :: Id a -> Id a
foo' = foo (undefined :: a)
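
As an aside, to get this past GHC one needs scoped type variables; a
self-contained version (my assembly, with the pragmas and the explicit
forall added) is:

{-# LANGUAGE TypeFamilies, ScopedTypeVariables #-}

type family Id a
type instance Id Int = Int

-- The proxy argument fixes the otherwise unconstrained 'a'.
foo :: a -> Id a -> Id a
foo _ = id

-- The explicit forall brings 'a' into scope in the body.
foo' :: forall a. Id a -> Id a
foo' = foo (undefined :: a)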


Martin



Ganesh Sittampalam wrote:

On Wed, 9 Apr 2008, Manuel M T Chakravarty wrote:

Sittampalam, Ganesh:


No, I meant can't it derive that equality when
matching (Id a) against (Id b)? As you say, it can't
derive (a ~ b) at that point, but (Id a ~ Id b) is
known, surely?


No, it is not known.  Why do you think it is?


Well, if the types of foo and foo' were forall a. a -> a and
forall b. b -> b, I would expect the type-checker to unify a
and b in the argument position and then discover that this
equality made the result position unify too. So why can't the
same happen but with Id a and Id b instead?

The problem is really with foo and its signature, not with
any use of foo. The function foo is (due to its type)
unusable.  Can't you change foo?


Here's a cut-down version of my real code. The type family
Apply is very important because it allows me to write class
instances for things that might be its first parameter, like
Id and Comp SqlExpr Maybe, without paying the syntactic
overhead of forcing client code to use Id/unId and
Comp/unComp. It also squishes nested Maybes which is important
to my application (since SQL doesn't have them).

castNum is the simplest example of a general problem - the
whole point is to allow clients to write code that is
overloaded over the first parameter to Apply using primitives
like castNum. I'm

Re: [Haskell-cafe] type families and type signatures

2008-04-09 Thread Martin Sulzmann

Claus Reinke wrote:

type family Id a

type instance Id Int = Int

foo :: Id a -> Id a
foo = id

foo' :: Id a -> Id a
foo' = foo


type function notation is slightly misleading, as it presents
qualified polymorphic types in a form usually reserved for unqualified 
polymorphic types.


rewriting foo's type helped me to see the ambiguity that
others have pointed out:

foo :: r ~ Id a => r -> r

there's nowhere to get the 'a' from. so unless 'Id' itself is
fully polymorphic in 'a' (Id a = a, Id a = Int, ..), 'foo' can't
really be used. for each type variable, there needs to be
at least one position where it is not used as a type family
parameter.


Correct, because unlike ordinary type constructors, type family constructors are not injective.

[Side note: With FDs you can enforce bijection

class Inj a b | a -> b
class Bij a b | a -> b, b -> a
]


it might be useful to issue an ambiguity warning for such types?

perhaps, with closed type families, one might be able to
reason backwards (is there a unique 'a' such that 'Id a ~ Int'?),
as with bidirectional FDs. but if you want to flatten all
'Maybe'-nests, even that direction isn't unique.


True, type checking with closed type families will rely on some form of
solving and then we're more likely to guess (the right) types.


The type checking problem for foo' boils down to verifying the formula

forall a. exists b. Id a ~ Id b


naively, i'd have expected to run into this instead/first:

(forall a. Id a -> Id a) ~ (forall b. Id b -> Id b)

which ought to be true modulo alpha conversion, without guesses,
making 'foo'' as valid/useless as 'foo' itself.

where am i going wrong?


Your notion of subsumption is too strong here.
I've been using


 (forall as. C => t) <= (forall bs. C' => t')

iff

 forall bs. (C' implies exists as. (C /\ t = t'))

where

(forall as. C => t) is the inferred type

(forall bs. C' => t') is the annotated type

Makes sense?


Even if we establish

(forall a. Id a -> Id a) ~ (forall b. Id b -> Id b)

we'd need to guess a ~ b
(which is of course obvious)


claus

ps. i find these examples and discussions helpful, if often initially
   puzzling - they help to clear up weak spots in the practice
   of type family use, understanding, and implementation,
thereby improving all while making families more familiar!-)

Absolutely!

Martin




Re: [Haskell-cafe] type families and type signatures

2008-04-09 Thread Martin Sulzmann

Claus Reinke wrote:
The point is that the GHC type checker relies on automatic inference. 
Hence, there'll

always be cases where certain reasonable type signatures are rejected.

..
To conclude, any system with automatic inference will necessarily 
reject certain type signatures/instances

in order to guarantee soundness, completeness and termination.


i think Lennart's complaint is mainly about cases where GHC does not 
accept the very type signature it infers itself. all of

which cases i'd consider bugs myself, independent of what
type system feature they are related to.

there are also the related cases of type signatures which cannot be 
expressed in the language but which might

occur behind the scenes. all of these cases i'd consider
language limitations that ought to be removed.

the latter cases also mean that type errors are reported either in a 
syntax that can not be used in programs or in a misleading external 
syntax that does not fully represent the internal type.


in this example, ghci infers a type for foo' that it rejects as a type 
signature for foo'. so, either the inferred type

is inaccurately presented, or there's a gap in the type
system, between inferred and declared types, right?



Good point(s) which I missed earlier.

Type inference (no annotations) is easy, but type checking is much harder.
Type checking is not all about pure checking: we also perform some inference
for the Hindley/Milner unification variables which arise out of the program text
(we need to satisfy these constraints via the annotation).

To make type checking easier we should impose restrictions on inference.
We (Tom/SimonPJ/Manuel) thought about this possibility. The problem is 
that it's

hard to give an intuitive, declarative description of those restrictions.

Martin





Re: [Haskell-cafe] Re: type families and type signatures

2008-04-08 Thread Martin Sulzmann
The point is that Mark proposes a *pessimistic* ambiguity check whereas 
Tom (as well as GHC) favors

an *optimistic* ambiguity check.

By pessimistic I mean that we immediately reject a program/type if
there's a potential ambiguity. For example,

class Foo a b

forall a b.  Foo a b => b -> b

is potentially ambiguous assuming we encounter

instance Foo Int Char
instance Foo Bool Char

But such instances might never arise. See Tom's example below which applies
an optimistic ambiguity check. In the extreme case, the optimistic 
ambiguity check

only checks for ambiguity when calling the ground top-level function main.
At this point (at the latest), we must provide (unambiguous) evidence
for type classes and type equations.
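
To make the pessimistic reading concrete, here is a small sketch (my own
assembly, not taken from Mark's thesis; recent GHC needs AllowAmbiguousTypes
to even accept bar's signature):

{-# LANGUAGE MultiParamTypeClasses, AllowAmbiguousTypes #-}

class Foo a b

instance Foo Int Char
instance Foo Bool Char

-- 'a' occurs in the context but not in the body of the type: at a
-- use site with b ~ Char both instances above apply, so the choice
-- of 'a' (Int or Bool) can never be determined.
bar :: Foo a b => b -> b
bar = id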

In summary:

- The pessimistic ambiguity check is more in line with Haskell's open 
world view
(of type classes and type families/functions being open/extensible). 
Anything can
happen in the future (we might add new type instances). So let's 
assume the

worst and immediately report any potential ambiguity.

- The optimistic ambiguity check takes into account the set of available 
instances.
  Depending on the set of instances there may not be any ambiguity 
after all.


Both strategies are backed up by theoretical results. See the Coherence 
Theorems
in Mark's thesis and A Theory of Overloading (I'm happy to provide 
more concrete

pointers if necessary).

Martin


Tom Schrijvers wrote:

Hi Tom,

It seems we are thinking of different things.  I was referring to
the characterization of a type of the form P = t as being ambiguous
if there is a type variable in P that is not determined by the
variables in t; this condition is used in Haskell to establish
coherence (i.e., to show that a program has a well-defined semantics).


[...]


Technically, one could ignore the ambiguous type signature for
bar, because the *principal* type of bar (as opposed to the
*declared type*) is not ambiguous.  However, in practice, there
is little reason to allow the definition of a function with an
ambiguous type because such functions cannot be used in practice:
the ambiguity that is introduced in the type of bar will propagate
to any other function that calls it, either directly or indirectly.
For this reason, it makes sense to report the ambiguity at the
point where bar is defined, instead of deferring that error to
places where it is used, like the definition of bar'.  (The latter
is what I mean by delayed ambiguity checking.)


Thanks for explaining the ambiguity issue, Mark. I wasn't thinking 
about that. We have thought about ambiguity. See Section 7.3 in our paper


http://www.cs.kuleuven.be/~toms/Research/papers/draft_type_functions_2008.pdf 



Note that neither Definition 3 nor Definition 4 demands that all 
unification variables are substituted with ground types during type 
checking. So we do allow for a form of ambiguity: when any type is 
valid (this has no impact on the semantics). Consider the initial program


type family F a

foo :: F a -> F a
foo = id

You propose to reject function foo because it cannot be used 
unambiguously. We propose to accept foo, because the program could be 
extended with


type instance F x = Int

and, for instance, this would be valid:

foo2 :: F Char -> F Bool
foo2 = foo

If you look at the level of the equality constraints:

(F Char -> F Bool) ~ (F alpha -> F alpha)

we normalise first wrt the instance F x = Int, and get

(Int -> Int) ~ (Int -> Int)

which is trivially true. In this process we do not substitute alpha. 
So alpha is ambiguous, but any solution will do and not have an impact 
on program execution. GHC already did this before type functions: for
this kind of ambiguity, it substitutes alpha for an arbitrary type. 
That's not unreasonable, is it?
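
Putting these fragments together, a runnable rendering of the example
might look like this (my assembly; the degenerate instance makes every
application of F normalise to Int):

{-# LANGUAGE TypeFamilies #-}

type family F a
type instance F x = Int   -- every F t normalises to Int

foo :: F a -> F a
foo = id

-- Accepted: both sides normalise to Int -> Int, even though the
-- instantiation of foo's 'a' is never determined.
foo2 :: F Char -> F Bool
foo2 = foo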


Cheers,

Tom

--
Tom Schrijvers

Department of Computer Science
K.U. Leuven
Celestijnenlaan 200A
B-3001 Heverlee
Belgium

tel: +32 16 327544
e-mail: [EMAIL PROTECTED]
url: http://www.cs.kuleuven.be/~toms/


Re: [Haskell-cafe] STAMP benchmark in Haskell?

2008-03-03 Thread Martin Sulzmann

Some Haskell-STM benchmarks can be found here:

Dissecting Transactional Executions in Haskell
Cristian Perfumo et al
http://www.cs.rochester.edu/meetings/TRANSACT07/

Martin


Jan-Willem Maessen writes:
  
  On Mar 1, 2008, at 6:41 PM, Tom Davies wrote:
  
   I'm experimenting with STM (in CAL[1] rather than Haskell)
   and want to run the STAMP[2] benchmarks.
  
  Hmm, I don't know of a particularly good STM-in-Haskell benchmark, but  
  I'd say that the STAMP benchmarks are written in a rather imperative,  
  object-oriented style.  You wouldn't get very meaningful data about  
  anything if you were to naively translate them to Haskell; you'd  
  instead have to rewrite them completely (at which point head-to-head  
  comparisons are difficult).
  
   Is there a Haskell translation available, or can anyone
   suggest a better/different benchmark suite for STM?
  
  Good question.  Because we tend to eschew mutable state in Haskell,  
  I'd expect the characteristics of such an application to be *very*  
  different.
  
  -Jan-Willem Maessen
  
  
  
   Thanks,
Tom
  
   [1] http://openquark.org/Open_Quark/Welcome.html
   [2] http://stamp.stanford.edu/
  


Re: [Haskell-cafe] Missing join and split

2008-01-01 Thread Martin Sulzmann
Josef Svenningsson writes:
  On Dec 28, 2007 11:40 PM, Mitar [EMAIL PROTECTED] wrote:
   Would not it be interesting and useful (but not really efficient) to
   have patterns something like:
  
   foo :: Eq a => a -> ...
   foo (_{4}'b') = ...
  
   which would match a list with four elements ending with an element 'b'. Or:
  
   foo (_+';'_+';'_) = ...
  
   which would match a list with two embedded ';' elements. (The last _
   matches the remainder of the list.)
  
  I suggest you take at look at HaRP, Haskell Regular Patterns:
  http://www.cs.chalmers.se/~d00nibro/harp/
  
  It hasn't been updated for a while but it should still be useable.
  

Also of interest might be XHaskell
http://taichi.ddns.comp.nus.edu.sg/taichiwiki/XhaskellHomePage
which adds XDuce style regular expression pattern matching to Haskell.

Martin


[Haskell-cafe] SingHaskell slides

2007-12-05 Thread Martin Sulzmann

Slides (in pdf) are now available online:
http://taichi.ddns.comp.nus.edu.sg/taichiwiki/SingHaskell2007
http://www.comp.nus.edu.sg/~sulzmann/singhaskell07/index.html

- Tom  Martin


Re: [Haskell-cafe] Re: Type-level arithmetic

2007-10-21 Thread Martin Sulzmann
Manuel M T Chakravarty writes:
  Ross Paterson wrote,
   On Tue, Oct 16, 2007 at 10:56:27AM +1000, Manuel M T Chakravarty wrote:
   Lennart Augustsson wrote,
   And Haskell embedded a logical programming language on accident.
   Well, we are just trying to fix that :)
   
   Since types are inferred using unification, and classes are still present,
   adding functions yields a functional logic programming language at the
   type level.
  
  I used to agree with that, but I changed my opinion.  Or more 
  precisely, I'd say that it is a very weak functional logic language. 
  Weak in the sense that its logic component is essentially nil.
  
  Let me justify this statement.  We have type variables that are like 
  logical variables in logic programming languages.  However, we 
  never use function definitions to guess values for type variables. 
  Instead, function evaluation simply suspends when it depends on an 
  uninstantiated variable and resumes when this variable becomes 
  instantiated.  (The functional logic people call this evaluation 
  strategy residuation.)  For example, if we have
  
 type family T a
 type instance T Int = Char
  
  then, given (T a ~ Char), we do *not* execute T backwards and infer 
  (a = Int).  Instead, we leave (T a ~ Char) as it is and evaluate 'T' 
  only when 'a' becomes fixed.
  

Explained slightly differently.

The above type function is open (hence, we only apply matching)
whereas in functional logic programming type functions are closed
(therefore, we use unification/residuation).

Such an approach fits well together with open type classes as found
in Haskell.
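
In today's GHC syntax (equality constraints postdate this thread), the
residuation behaviour can be seen in a small sketch of mine:

{-# LANGUAGE TypeFamilies #-}

type family T a
type instance T Int = Char

-- The constraint (T a ~ Char) simply suspends: GHC does not run T
-- backwards to improve a to Int, because T is open and non-injective.
g :: T a ~ Char => a -> ()
g _ = ()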

Martin



  There have been proposals for truly functional logic languages 
  using residuation, but they include support for backtracking and 
  producing multiple solutions to a single query.  We support neither 
  on the type level.
  
  So, the only interaction between type functions and unification left 
  is that function evaluation suspends on uninstantiated type 
  variables and resumes when they become instantiated.  This is not 
  functional logic programming, it is *concurrent* functional 
  programming with single-assignment variables.  In fact, languages 
  such as Id and pH, which are routinely characterised as concurrent 
  functional, use exactly this model.
  
  I don't think the presence of type classes *without* functional 
  dependencies changes this.
  
  Manuel
  
  PS: You get a *real* functional logic language by truly performing
   unification modulo an equational theory.  This leads to the
   concept of E-unification and, in our constructor-based context,
   to narrowing as an evaluation strategy.


[Haskell-cafe] Re: [Haskell] [Fwd: undecidable overlapping instances: a bug?]

2007-10-21 Thread Martin Sulzmann
Iavor Diatchki writes:
  Hello,
  
  On 10/19/07, Martin Sulzmann [EMAIL PROTECTED] wrote:
   Simon Peyton-Jones writes:
 ...
 Like you, Iavor, I find it very hard to internalise just why (B) and 
   (C) are important.  But I believe the paper gives examples of why they 
   are, and Martin is getting good at explaining it. Martin: can you give an 
   example, once more, of the importance of (B) (=fullness)?

  
   Fullness (B) is a necessary condition to guarantee that the constraint
   solver (aka CHR solver) derived from the type class program is confluent.
  
   Here's an example (taken from the paper).
  
 class F a b c | a -> b
 instance F Int Bool Char
 instance F a b Bool => F [a] [b] Bool
  
   The FD is not full because the class parameter c is not involved in
   the FD. We will show now that the CHR solver is not confluent.
  
   Here is the translation to CHRs (see the paper for details)
  
 rule F a b1 c, F a b2 d  ==> b1=b2        -- (FD)
 rule F Int Bool Char    <==> True         -- (Inst1)
 rule F Int a b           ==> a=Bool       -- (Imp1)
 rule F [a] [b] Bool     <==> F a b Bool   -- (Inst2)
 rule F [a] c d           ==> c=[b]        -- (Imp2)
  
  
   The above CHRs are not confluent. For example,
   there are two possible CHR derivations for the initial
   constraint store F [a] [b] Bool, F [a] b2 d
  
   F [a] [b] Bool, F [a] b2 d
   -->_FD (means apply the FD rule)
   F [a] [b] Bool, F [a] [b] d, b2=[b]
   -->_Inst2
   F a b Bool, F [a] [b] d, b2=[b]   (*)
  
  
   Here's the second CHR derivation
  
   F [a] [b] Bool, F [a] b2 d
   -->_Inst2
   F a b Bool, F [a] b2 d
   -->_Imp2
   F a b Bool, F [a] [c] d, b2=[c]   (**)
  
  
   (*) and (**) are final stores (i.e. no further CHRs are applicable).
   Unfortunately, they are not logically equivalent (one store says
   b2=[b] whereas the other store says b2=[c]).
  
  But what is wrong with applying the following logical reasoning:
  

Well, you apply the above CHRs from left to right *and* right to
left. In contrast, I apply the CHRs only from left to right.

  Starting with (**):
  F a b Bool, F [a] [c] d, b2=[c]
  (by inst2)
  F a b Bool, F [a] [c] d, b2=[c], F [a] [b] Bool
  (by FD)
  F a b Bool, F [a] [c] d, b2=[c], F [a] [b] Bool, [c] = [b]
  (applying equalities and omitting unnecessary predicates)
  F [a] [b] Bool, F [a] b2 d
  (...and then follow your argument to reach (*)...)
  
  Alternatively, if we start at (*):
  F a b Bool, F [a] [b] d , b_2=[b]
  (by inst2)
  F a b Bool, F [a] [b] d , b_2=[b], F [a] [b] Bool
  (applying equalities, rearranging, and omitting unnecessary predicates)
  F [a] [b] Bool, F [a] b_2 d
  (... and then follow you reasoning to reach (**) ...)
  
  So it would appear that the two sets of predicates are logically equivalent.
  

I agree that the two sets

F a b Bool, F [a] [b] d, b2=[b]   (*)

and 

F a b Bool, F [a] [c] d, b2=[c]   (**)

are logically equivalent wrt the above CHRs (see your proof).

The problem/challenge we are facing is to come up with a confluent and
terminating CHR solver. Why is this useful?
(1) We get a deterministic solver.
(2) We can decide whether two constraints C1 and C2 are equal by
solving (a) C1 -->* D1 and
        (b) C2 -->* D2
and then checking that D1 and D2 are equivalent.
The equivalence is a simple syntactic check here.

The CHR solving strategy applies rules in a fixed order (from left to
right). My example shows that under such a strategy the CHR solver
becomes non-confluent for type class programs with non-full FDs.

We can show non-confluence by having two derivations starting with the
same initial store leading to different final stores.

Recall:

 F [a] [b] Bool, F [a] b2 d
-->* F a b Bool, F [a] [b] d, b2=[b]   (*)


 F [a] [b] Bool, F [a] b2 d
-->* F a b Bool, F [a] [c] d, b2=[c]   (**)

I said

   (*) and (**) are final stores (i.e. no further CHRs are applicable).
   Unfortunately, they are not logically equivalent (one store says
   b2=[b] whereas the other store says b2=[c]).

More precisely, I meant here that (*) and (**) are not logically
equivalent *not* taking into account the above CHRs.
This means that (*) and (**) are different (syntactically). 

   To conclude, fullness is a necessary condition to establish confluence
   of the CHR solver. Confluence is vital to guarantee completeness of
   type inference.
  
  
   I don't think that fullness is an onerous condition.
  
  I agree with you that, in practice, many classes probably use full
  FDs.  However, these extra conditions make the system more
  complicated, and we should make clear what exactly we are getting in
  return for the added complexity.
  

You can get rid of non-full FDs (see the paper). BTW, type functions
are full by construction. 

Martin



[Haskell-cafe] Re: [Haskell] [Fwd: undecidable overlapping instances: a bug?]

2007-10-21 Thread Martin Sulzmann
Mark P Jones writes:
  Hi All,
  
  Here are my responses to the recent messages, starting with some
  summary comments:
  
  - I agree with Martin that the condition I posted a few days ago is
 equivalent to the *refined* weak coverage condition in your
 paper.  The refined tag here is important---I missed it the
 first time and thought he was referring to the WCC at the start of
 Section 6.1, and I would strongly recommend against using that
 (unrefined) version.  I confess that I haven't read the paper
 carefully enough to understand why you included that first WCC at
 all; I don't see any reason why you would want to use that
 particular condition in a practical implementation, and I don't
 see any advantages to it from a theoretical perspective either.
 (Maybe you just used it as a stepping stone to help motivate the
 refined WCC?)
  

It's a stepping stone, but WCC on its own is useful to recover
termination (see the zip example in the paper).

  - I believe that the refined WCC (or the CC or even the original WCC
 if you really want) are sufficient to ensure soundness.  I don't
 agree that the (B) restriction to full FDs is necessary.
  
  - I think it is dangerous to tie up soundness with questions about
 termination.  The former is a semantic property while the latter
 has to do with computability.  Or, from a different perspective,
 soundness is essential, while termination/decidability is only
 desirable.  I know that others have thought more carefully than
 I have about conditions on the form of class and instance
 declarations that will guarantee decidability and I don't have
 much to say on that topic in what follows.  However, I will
 suggest that one should start by ensuring soundness and then,
 *separately*, consider termination.  (Of course, it's perfectly
 ok---just not essential---if conditions used to ensure soundness
 can also be used to guarantee termination.)
  


Well, decidability and completeness matter if we care about automatic
type inference.

Given some input program, we'd like to have a *decidable* algorithm
which computes a type for every well-typed program (*completeness*) and
yields failure if the program is ill-typed (*soundness*).

In "Simplifying and Improving Qualified Types" you indeed give
a sound, decidable and complete type inference algorithm
*if* the proof theory among constraints in some theory of qualified
types is sound, decidable and complete.

My main concern is to establish sufficient conditions such that the 
proof theory is sound, decidable and complete. That is, there's a
decidable algorithm which says yes if QT |= C1 -> C2 and otherwise no,
where QT |= C1 -> C2 means that in the qualified theory QT constraint
C1 entails constraint C2.

I think that our works are fairly complementary. Or are you saying
you've already given such an algorithm (which I indeed must have
missed)?

I believe that you are offended by my statement that 

  Confluence is vital to guarantee completeness of type inference.

Let me clarify. Confluence is one of the sufficient conditions to
guarantee completeness of CHR-based type inference. Apologies for any
confusion caused by my previous statement.

There are clearly other approaches possible to obtain a sound,
decidable and complete proof theory (and therefore to obtain sound,
decidable and complete type inference). But I have yet to see concrete
results which match the CHR-based approach/results described in:

Understanding Functional Dependencies via Constraint Handling Rules
Martin Sulzmann, Gregory J. Duck, Simon Peyton-Jones and Peter
J. Stuckey
In Journal of Functional Programming, 2007

Martin


  Stop reading now unless you're really interested in the details!
  I'm going to start by explaining some key (but hopefully
  unsurprising) ideas before I respond in detail to the example
  that Martin posted.
  
  Interpreting Class, Instance, and Dependency Decls:
  ---
  I'll start by observing that Haskell's class, instance, and
  functional dependency declarations all map directly to formulas in
  standard predicate logic.  The correspondence is straightforward, so
  I'll just illustrate it with some examples:
  
 Declaration                Formula
 ---                        ---
 class Eq a => Ord a        forall a. Ord a => Eq a
 instance Eq a => Eq [a]    forall a. Eq a => Eq [a]
 class C a b | a -> b       forall a, b, c. C a b /\ C a c => b = c
  
  This correspondence between declarations and formulas seems to be
  very similar to what you did in the paper using CHRs.  I haven't
  read the paper carefully enough to know what extra mileage and/or
  problems you get by using CHRs.  To me, predicate logic seems more
  natural, but I don't think that matters here---I just wanted to
  acknowledge that essentially the same idea is in your paper.
  
  In the following

[Haskell-cafe] RE: [Haskell] [Fwd: undecidable overlapping instances: a bug?]

2007-10-19 Thread Martin Sulzmann
Simon Peyton-Jones writes:
  | I believe that this weak coverage condition (which is also called
  | the dependency condition somewhere on the wiki) is exactly what GHC
  | 6.4 used to implement but than in 6.6 this changed.  According to
  | Simon's comments on the trac ticket, this rule requires FDs to be
  | full to preserve the confluence of the system that is described in
  | the Understanding Fds via CHRs paper.  I have looked at the paper
  | many times, and I am very unclear on this point, any enlightenment
  | would be most appreciated.
  
  Right.  In the language of http://hackage.haskell.org/trac/ghc/ticket/1241, 
  last comment (date 10/17/07), it would be *great* to
  
  replace the Coverage Condition (CC)
  with the Weak Coverage Condition (WCC)
  
  as Mark suggests.  BUT to retain soundness we can only replace CC with
  WCC + B
  WCC alone without (B) threatens soundness.  To retain termination as well 
  (surely desirable) we must have
  WCC + B + C
  
  (You'll have to look at the ticket to see what B,C are.)
  
  And since A+B+C are somewhat onerous, we probably want to have
  CC  or  (WCC + B + C)
  
  
  At least we know of nothing else weaker that will do (apart from CC of 
  course).
  
  
  Like you, Iavor, I find it very hard to internalise just why (B) and (C) are 
  important.  But I believe the paper gives examples of why they are, and 
  Martin is getting good at explaining it. Martin: can you give an example, 
  once more, of the importance of (B) (=fullness)?
  

Fullness (B) is a necessary condition to guarantee that the constraint
solver (aka CHR solver) derived from the type class program is confluent.

Here's an example (taken from the paper).

  class F a b c | a -> b
  instance F Int Bool Char
  instance F a b Bool => F [a] [b] Bool

The FD is not full because the class parameter c is not involved in
the FD. We will show now that the CHR solver is not confluent.

Here is the translation to CHRs (see the paper for details)

  rule F a b1 c, F a b2 d  ==> b1=b2        -- (FD)
  rule F Int Bool Char    <==> True         -- (Inst1)
  rule F Int a b           ==> a=Bool       -- (Imp1)
  rule F [a] [b] Bool     <==> F a b Bool   -- (Inst2)
  rule F [a] c d           ==> c=[b]        -- (Imp2)


The above CHRs are not confluent. For example,
there are two possible CHR derivations for the initial
constraint store F [a] [b] Bool, F [a] b2 d

F [a] [b] Bool, F [a] b2 d
-->_FD (means apply the FD rule)
F [a] [b] Bool, F [a] [b] d, b2=[b]
-->_Inst2
F a b Bool, F [a] [b] d, b2=[b]   (*)


Here's the second CHR derivation

F [a] [b] Bool, F [a] b2 d
-->_Inst2
F a b Bool, F [a] b2 d
-->_Imp2
F a b Bool, F [a] [c] d, b2=[c]   (**)


(*) and (**) are final stores (i.e. no further CHRs are applicable).
Unfortunately, they are not logically equivalent (one store says
b2=[b] whereas the other store says b2=[c]).

To conclude, fullness is a necessary condition to establish confluence
of the CHR solver. Confluence is vital to guarantee completeness of
type inference.


I don't think that fullness is an onerous condition.

Martin



[Haskell-cafe] Re: [Haskell] [Fwd: undecidable overlapping instances: a bug?]

2007-10-18 Thread Martin Sulzmann
Mark P Jones writes:
  [Sorry, I guess this should have been in the cafe ...]
  
  Simon Peyton-Jones wrote:
   The trouble is that
   a) the coverage condition ensures that everything is well behaved
   b) but it's too restrictive for some uses of FDs, notably the MTL library
   c) there are many possibilities for more generous conditions, but
   the useful ones all seem complicated
   
   Concerning the last point I've dumped the current brand leader
   for (c) into http://hackage.haskell.org/trac/ghc/ticket/1241#comment:15.
   
   Better ideas for (c) would be welcome.
  
  Let's take the declaration:  instance P => C t where ...
  The version of the coverage condition in my paper [1] requires
  that TV(t_Y) \subseteq TV(t_X), for each dependency (X->Y) \in F_C.
  (I'm using the notation from the paper; let me know if you need more
  help to parse it.)  This formulation is simple and sound, but it
  doesn't use any dependency information that could be extracted from P.
  To remedy this, calculate L = F_P, the set of functional dependencies
  induced by P, and then expand the right hand side of the set inequality
  above by taking the closure of TV(t_X) with respect to L.  In symbols,
  we have to check that:
  
 TV(t_Y) \subseteq TV(t_X)^+_L, for each (X->Y) \in F_C.
  
  I believe (but haven't formally proved) that this is sound; I don't
  know how to make a more general coverage condition without losing
  that.  I don't know if it's sufficient for examples like MTL (because
  I'm not sure where to look for details of what that requires), but
  if it isn't then I'd be very suspicious ...
  
  All the best,
  Mark
  
  [1] http://www.cs.pdx.edu/~mpj/pubs/fundeps-esop2000.pdf


I think the above is equivalent to the (refined) weak coverage
condition in [2] (see Section 6, p26). The weak coverage condition
gives us soundness. More precisely, we obtain confluence from which we
can derive consistency (which in turn guarantees that the type class
program is sound).

*BUT* this only works if we have termination which is often very
tricky to establish.

For the example,

 class Concrete a b | a -> b where
 bar :: a -> String

 instance (Show a) => Concrete a b

termination holds, but the weak coverage condition does *not*
hold. Indeed, this program should therefore be rejected.

Martin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] [Fwd: undecidable overlapping instances: a bug?]

2007-10-18 Thread Martin Sulzmann

Sorry, forgot to add

[2]
http://www.comp.nus.edu.sg/~sulzmann/publications/jfp-fds-revised.pdf

Martin


Martin Sulzmann writes:
  Mark P Jones writes:
[Sorry, I guess this should have been in the cafe ...]

Simon Peyton-Jones wrote:
 The trouble is that
 a) the coverage condition ensures that everything is well behaved
 b) but it's too restrictive for some uses of FDs, notably the MTL 
  library
 c) there are many possibilities for more generous conditions, but
 the useful ones all seem complicated
 
 Concerning the last point I've dumped the current brand leader
 for (c) into http://hackage.haskell.org/trac/ghc/ticket/1241#comment:15.
 
 Better ideas for (c) would be welcome.

Let's take the declaration:  instance P => C t where ...
The version of the coverage condition in my paper [1] requires
that TV(t_Y) \subseteq TV(t_X), for each dependency (X->Y) \in F_C.
(I'm using the notation from the paper; let me know if you need more
help to parse it.)  This formulation is simple and sound, but it
doesn't use any dependency information that could be extracted from P.
To remedy this, calculate L = F_P, the set of functional dependencies
induced by P, and then expand the right hand side of the set inequality
above by taking the closure of TV(t_X) with respect to L.  In symbols,
we have to check that:

   TV(t_Y) \subseteq TV(t_X)^+_L, for each (X->Y) \in F_C.

I believe (but haven't formally proved) that this is sound; I don't
know how to make a more general coverage condition without losing
that.  I don't know if it's sufficient for examples like MTL (because
I'm not sure where to look for details of what that requires), but
if it isn't then I'd be very suspicious ...

All the best,
Mark

[1] http://www.cs.pdx.edu/~mpj/pubs/fundeps-esop2000.pdf
  
  
  I think the above is equivalent to the (refined) weak coverage
  condition in [2] (see Section 6, p26). The weak coverage condition
  gives us soundness. More precisely, we obtain confluence from which we
  can derive consistency (which in turn guarantees that the type class
  program is sound).
  
  *BUT* this only works if we have termination which is often very
  tricky to establish.
  
  For the example,
  
   class Concrete a b | a -> b where
   bar :: a -> String

   instance (Show a) => Concrete a b
  
  termination holds, but the weak coverage condition does *not*
  hold. Indeed, this program should therefore be rejected.
  
  Martin


[Haskell-cafe] Typeclass vs. Prolog programming

2006-10-02 Thread Martin Sulzmann


[EMAIL PROTECTED] writes:
  
  There is a great temptation to read the following declarations
  
   class Pred a b
   instance (Ord b, Eq b) => Pred [Int] b
   instance Pred Bool Bool
  
  as a Prolog program:
  
   pred([int],B) :- ord(B), eq(B).
   pred(bool,bool).
  
  (In Prolog, the names of variables are capitalized and the names of
  constants begin with a lower-case letter. In Haskell type expressions,
  it's the other way around).
  

Why not simply say that type classes have a natural interpretation
in terms of Constraint Handling Rules (CHRs). For example,
the above instances translate to

rule Pred [Int] b   <==> Ord b, Eq b
rule Pred Bool Bool <==> True

CHRs specify rewrite rules among constraints (from left to right).
A CHR fires if we find a constraint which matches the left-hand
side (compare this to Prolog, where we use unification).

Why CHRs are great for reasoning about type classes is well-documented
here:
http://www.comp.nus.edu.sg/~sulzmann/research/node4.html

Martin


RE: [Haskell-cafe] Re: coherence when overlapping?

2006-04-16 Thread Martin Sulzmann

Incoherence may also arise because of an ambiguous type.
Here's the classic example.

   class Read a where read :: String -> a
   class Show a where show :: a -> String

   f s = show (read s)

f has type String -> String, therefore we can pick
arbitrary Read/Show instances.
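
A concrete rendering (the primed names are mine, to avoid clashing with
the Prelude's Read and Show):

class Read' a where read' :: String -> a
class Show' a where show' :: a -> String

-- Inferred: f :: (Read' a, Show' a) => String -> String.
-- The variable 'a' occurs only in the context, so no use of f can
-- ever determine it; GHC rejects this with an ambiguous-type error.
f :: String -> String
f s = show' (read' s)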

If you want to know more about coherence/ambiguity in the Haskell context,
check out

@TechReport{jones:coherence,
  author =      {M. P. Jones},
  title =       {Coherence for qualified types},
  institution = {Yale University, Department of Computer Science},
  year =        {1993},
  month =       {September},
  type =        {Research Report},
  number =      {YALEU/DCS/RR-989}
}

and

@Article{overloading-journal,
  author =   {P.~J.~Stuckey and M.~Sulzmann },
  title ={A Theory of Overloading},
  journal =  {ACM Transactions on Programming Languages and Systems 
(TOPLAS)},
  publisher = ACM Press,
  year = 2005,
  pages = 1-54,
  volume = 27,
  number = 6,
  preprint =
{http://www.comp.nus.edu.sg/~sulzmann/chr/download/theory-journal.ps.gz}}


As far as I know, the term coherence was coined by

@article{breazu-tannen-etal:inheritance-coercion,
  author =  {V. Breazu{-}Tannen and T. Coquand and C. Gunter and A. Scedrov},
  title =   {Inheritance as Implicit Coercion},
  journal = {Information and Computation},
  volume =  {93},
  number =  {1},
  month =   jul,
  year =    {1991},
  pages =   {172--221}
}

Martin


william kim writes:
  Thank you Martin.
  
  Coherence (roughly) means that the program's semantics is independent
  of the program's typing.
  
  In case of your example below, I could type the program
  using either the first or the second instance (assuming
  g has type Int -> Int). That's clearly incoherent.
  
  If g has type Int -> Int, it is not hard to say the first instance should 
  apply.
  But how about g having a polymorphic type? In this case it seems to me 
  choosing the second instance is an acceptable choice as that is the only 
  applicable one at the moment. What is the definition of a coherent 
  behaviour here? Or is there one?
  
  
  Non-overlapping instances are necessary but not sufficient to
  obtain coherence. We also need that types/programs are unambiguous.
  
  Do you therefore imply that coherence is not defined without the 
  non-overlapping assumption?
  
  --william
  


[Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread Martin Sulzmann

Coherence (roughly) means that the program's semantics is independent
of the program's typing.

In case of your example below, I could type the program
using either the first or the second instance (assuming
g has type Int -> Int). That's clearly incoherent.

Guard constraints enforce that instances are non-overlapping.

instance C Int
instance C a | a =!= Int

The second instance can only fire if a is different from Int.

Non-overlapping instances are necessary but not sufficient to
obtain coherence. We also need that types/programs are unambiguous.

Martin


william kim writes:
  Thank you oleg.
  
  Sulzmann et al use guards in CHR to turn overlapping heads (instances) into 
  non-overlapping. Their coherence theorem still assumes non-overlapping.
  
  I agree that what you described is the desirable behaviour when overlapping, 
  that is to defer the decision when multiple instances match. However, why 
  is this coined as coherence? What is the definition of coherence in this 
  case?
  
  class C a where
  f :: a -> a
  instance C Int where
 f x = x+1
  instance C a where
 f x = x
  
  g x = f x
  
  In a program like this, how does a coherence theorem rule out the 
  incoherent behaviour of committing f to the second instance early?
  
  I am very confused. Please help.
  
  --william
  
  From: [EMAIL PROTECTED]
  Reply-To: [EMAIL PROTECTED]
  To: [EMAIL PROTECTED], haskell-cafe@haskell.org
  Subject: Re: coherence when overlapping?
  Date: 13 Apr 2006 03:46:38 -
  
But I am still confused by the exact definition of coherence in the case 
  of
overlapping. Does the standard coherence theorem apply? If yes, how?
If no, is there a theorem?
  
  Yes, there is, by Martin Sulzmann et al, the Theory of Overloading (the
  journal version)
  http://www.comp.nus.edu.sg/~sulzmann/ms_bib.html#overloading-journal
  
  A simple intuition is this: instance selection may produce more than
  one candidate instance. Having inferred a polymorphic type with
  constraints, the compiler checks to see of some of the constraints can
  be reduced. If an instance selection produces no candidate instances,
  the typechecking failure is reported. If there is exactly one
  candidate instance, it is selected and the constraint is removed
  because it is resolved.  An instance selection may produce more then
  one candidate instance. Those candidate instances may be incomparable:
  for example, given the constraint C a and instances C Int and C
  Bool, both instances are candidates. If such is the case, the
  resolution of that constraint is deferred and it `floats out', to be
  incorporated into the type of the parent expression, etc. In the
  presence of overlapping instances, the multiple candidate instances
  may be comparable, e.g. C a and C Int.  The compiler then checks
  to see if the target type is at least as specific as the most specific
  of those candidate instances. If so, the constraint is reduced;
  otherwise, it is deferred.  Eventually enough type information is
  available to reduce all constraints (or else it is a type error).
  


Re: [Haskell-cafe] Re: coherence when overlapping?

2006-04-13 Thread Martin Sulzmann

I believe that GHC's overlapping instances extension
effectively uses inequalities.

Why do you think that 'inequalities' model 'best-fit'?

instance C Int  -- (1)
instance C a-- (2)

under a 'best-fit' instance reduction strategy
we would resolve C a by using (2).

'best-fit' should be very easy to implement.
Simply order instances (resulting CHRs) in an appropriate
'best-fit' order.

In case of

instance C Int   
instance C a | a =!= Int   -- (2')

we can't reduce C a (because we can't satisfy a=!=Int)

Notice that (2') translates to

rule C a | a =!= Int <==> True

I think it's better to write a =!=Int not as part of the instance
context but write it as a guard constraint.

I don't think there's any issue for an implementation (either using
'best-fit' or explicit inequalities). The hard part is to establish
inference properties such as completeness etc.

Martin


Tom Schrijvers writes:
  On Thu, 13 Apr 2006, Martin Sulzmann wrote:
  
  
   Coherence (roughly) means that the program's semantics is independent
   of the program's typing.
  
   In case of your example below, I could type the program
   either use the first or the second instance (assuming
   g has type Int-Int). That's clearly bound.
  
   Guard constraints enforce that instances are non-overlapping.
  
   instance C Int
   instance C a | a =!= Int
  
   The second instance can only fire if a is different from Int.
  
   Non-overlapping instances are necessary but not sufficient to
   obtain coherence. We also need that types/programs are unambiguous.
  
  Claus Reinke was discussing this with me some time ago. He called it the 
  best fit principle, which would in a way, as you illustrate above, allow
  inequality constraints in the instance head. You could even write it like:
  
   instance (a /= Int) => C a
  
  as you would do with the superclass constraints... I wonder whether 
  explicit inequality constraints would be useful on their own in all the 
  places where type class and equality constraints are used (class and 
  instance declarations, GADTs, ...). Or maybe it opens a whole new can of 
  worms :)
  
  Tom
  
  --
  Tom Schrijvers
  
  Department of Computer Science
  K.U. Leuven
  Celestijnenlaan 200A
  B-3001 Heverlee
  Belgium
  
  tel: +32 16 327544
  e-mail: [EMAIL PROTECTED]


Re: [Haskell-cafe] (Un)termination of overloading resolution

2006-02-27 Thread Martin Sulzmann

I was talking about *static* termination. Hence, the conditions
in the paper and the new one I proposed are of course incomplete.
I think that's your worry, isn't it? There are reasonable 
type-level programs which are rejected but will terminate for certain
goals. 

I think what you'd like is that each instance specifies its own
termination condition which can then be checked dynamically.
Possible but I haven't thought much about it. The simplest and most
efficient strategy seems to stop after n number of steps.

Martin


Roman Leshchinskiy writes:
  On Mon, 27 Feb 2006, Martin Sulzmann wrote:
 In case we have an n-ary type function T
 (or (n+1)-ary type class constraint T)
 the condition says
 for each

 type T t1 ... tn = t

 (or rule T t1 ... tn x ==> x=t)

 then rank(ti) > rank(t) for each i=1,..,n
   
I'm probably misunderstanding something but doesn't this imply that we
cannot declare any instances for
   
  class C a b | a -> b, b -> a
   
which do not break the bound variable condition? This would remove one
of the main advantages fundeps have over associated types.
   
  
   Sure you can. For example,
  
   class C a b | a -> b, b -> a
   instance C [a] [a]
  
  Ah, sorry, my question was very poorly worded. What I meant to say was 
  that there are no instance declarations for C which satisfy your rule 
  above and, hence, all instances of C (or of any other class with 
  bidirectional dependencies) must satisfy the other, more restrictive 
  conditions. Is that correct?
  
   In your example below, you are not only breaking the Bound Variable
   Condition, but you are also breaking the Coverage Condition.
  
  Yes, but I'm breaking the rule you suggested only once :-) It was only 
  intended as a cute example. My worry, however, is that there are many 
  useful type-level programs similar to my example which are guaranteed to 
  terminate but which nevertheless do not satisfy the rules in your paper or 
  the one you suggested here. I think ruling those out is unavoidable if you 
  want to specify termination rules which every instance must satisfy 
  individually. But why not specify rules for sets of instances instead? 
  This is, for instance, what some theorem provers do for recursive 
  functions and it allows them to handle a wide range of those without 
  giving up decidability.
  
  Roman


Re: [Haskell-cafe] Re: Associated Type Synonyms question

2006-02-21 Thread Martin Sulzmann
Claus Reinke writes:
  The main argument for ATS is that the extra parameter for the
  functionally dependent type disappears, but as you say, that
  should be codeable in FDs. I say should be, because that does
  not seem to be the case at the moment.
   
  My approach for trying the encoding was slightly different from
  yours, but also ran into trouble with implementations.
  
  First, I think you need a per-class association, so your T a b
  would be specific to C. Second, I'd use a superclass context
  to model the necessity of providing an associated type, and
  instance contexts to model the provision of such a type. No
  big difference, but it seems closer to the intension of ATS:
  associated types translate into type association constraints.
  
  (a lot like calling an auxiliary function with empty accumulator,
  to hide the extra parameter from the external interface)
  
   Example
   
   -- ATS
   class C a where
type T a
 m :: a -> T a
   instance C Int where
type T Int = Int
m _ = 1
  
  -- alternative FD encoding attempt
  
class CT a b | a -> b
instance CT Int Int

class CT a b => C a where
m :: a -> b

instance CT Int b => C Int where
m _ = 1 :: b
  

Hm, I haven't thought about this. Two comments.
You use scoped variables in class declarations.
Are they available in ghc?

How do you encode?

--AT
instance C a => C [a] where
  type T [a] = [T a]
  m xs = map m xs

Via the following I guess?

instance CT a b => CT [a] [b]
instance C a => C [a] where
  m xs = map m xs

It seems your solution won't suffer from
the problem I face. See below.

   -- FD encoding
 class T a b | a -> b
   instance T Int Int
   
   class C a where
 m :: T a b => a -> b
   
   instance C Int where
m _ = 1
   
   -- general recipe:
   -- encode type functions T a via type relations T a b
   -- replace T a via fresh b under the constraint T a b
  

My AT encoding won't work with ghc/hugs because the class 
declaration of C demands that the output type b is universal. 
Though, in the declaration instance C Int we return an Int.
Hence, the above cannot be translated to System F.
Things would be different if we'd translate to an untyped back-end.

  referring to the associated type seems slightly awkward 
  in these encodings, so the special syntax for ATS would 
  still be useful, but I agree that an encoding into FDs should
  be possible.
   
   The FD program won't type check under ghc but this
   doesn't mean that it's not a legal FD program.
  
  glad to hear you say that. but is there a consistent version
  of FDs that allows these things - and if not, is that for lack
  of trying or because it is known not to work?
  

The FD framework in Understanding FDs via CHRs clearly subsumes
ATs (based on my brute-force encoding). My previous email showed
that type inference for FDs and ATs can be equally tricky.
Though, why ATs? Well, ATs are still
*very useful* because they help to structure programs (they avoid
redundant parameters). On the other hand, FDs provide the
user with the convenience of 'automatic' improvement.

Let's do a little test. Who can translate the
following FD program to AT?

zip2 :: [a] -> [b] -> [(a,b)]
zip2 (a:as) (b:bs) = (a,b) : (zip2 as bs)
zip2 _      _      = []

class Zip a b c | c -> a, c -> b where
  mzip :: [a] -> [b] -> c
instance Zip a b [(a,b)] where
  mzip = zip2
instance Zip (a,b) c e => Zip a b ([c] -> e) where
  mzip as bs cs = mzip (zip2 as bs) cs
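
For what it's worth, here is one attempt at an answer (mine, reusing zip2
from above and written with equality constraints and UndecidableInstances,
both of which postdate this thread; the auxiliary families P1/P2 are my
device for projecting the pair that the FD version threads through its
first two class parameters):

{-# LANGUAGE TypeFamilies, FlexibleInstances, FlexibleContexts,
             UndecidableInstances #-}

-- Projections out of a pair type.
type family P1 p
type family P2 p
type instance P1 (a, b) = a
type instance P2 (a, b) = b

class Zip c where
  type Fst c   -- plays the role of the FD  c -> a
  type Snd c   -- plays the role of the FD  c -> b
  mzip :: [Fst c] -> [Snd c] -> c

instance Zip [(a, b)] where
  type Fst [(a, b)] = a
  type Snd [(a, b)] = b
  mzip = zip2

instance (Zip e, Fst e ~ (P1 (Fst e), P2 (Fst e)), Snd e ~ c)
      => Zip ([c] -> e) where
  type Fst ([c] -> e) = P1 (Fst e)
  type Snd ([c] -> e) = P2 (Fst e)
  mzip as bs cs = mzip (zip2 as bs) cs

Whether this counts as a faithful translation is debatable: the equality
constraints re-introduce by hand exactly the improvement that the FDs in
the original perform automatically.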


Martin



[Haskell-cafe] (Un)termination of overloading resolution

2006-02-21 Thread Martin Sulzmann

This is not a counter-example to my conjecture.
Let's consider the general case (which I didn't
describe in my earlier email).

In case we have an n-ary type function T
(or (n+1)-ary type class constraint T)
the condition says
for each 

type T t1 ... tn = t

(or rule T t1 ... tn x ==> x=t)

then rank(ti) > rank(t) for each i=1,..,n

Your example violates this condition

   class E m a b | m a -> b
   instance E m (() -> ()) (m ())

The improvement rule says:

rule E m (() -> ()) x ==> x=(m ())

but rank m < rank (m ())

Your example shows that the condition

rank(t1)+...+rank(tn) > rank(t)

is not sufficient (but that's not a surprise).

Program text
  test = foo (\f -> (f ()) :: ()) (\f -> (f ()) :: ())
gives rise to

  Foo ((->) (() -> ())) ((() -> ()) -> ())

via
  instance (E m a b, Foo m b) => Foo m (a -> ()) where

this constraint reduces to

  E ((->) (() -> ())) (() -> ()) x
  Foo ((->) (() -> ())) x

the above improvement yields x = (((->) (() -> ())) ())

this leads to

Foo ((->) (() -> ())) (((->) (() -> ())) ())

and so on (the second component is increasing).

So, I'll stick to my claim. I don't think I have time
at the moment to work out the details of my claim/proof sketch.
But if somebody is interested, the following is a good
reference on how to attack the problem:
@inproceedings{thom-term,
  author =    {T. Fr{\"u}hwirth},
  title =     {Proving Termination of Constraint Solver Programs},
  booktitle = {Proc.\ of New Trends in Constraints: Joint {ERCIM/Compulog} Net Workshop},
  volume =    {1865},
  series =    {LNAI},
  publisher = {Springer-Verlag},
  year =      {2000}
}


Martin


[EMAIL PROTECTED] writes:
  
  Martin Sulzmann wrote:
  
   - The type functions are obviously terminating, e.g.
 type T [a] = [[a]] clearly terminates.
   - It's the devious interaction between instances/superclasss
 and type function which causes the type class program
 not to terminate.
  
   Is there a possible fix? Here's a guess.
   For each type definition in the AT case
  
   type T t1 = t2
  
   (or improvement rule in the FD case
  
   rule T t1 a ==> a=t2
  
   we demand that the number of constructors in t2
   is strictly smaller than that in t1
   (plus some of the other usual definitions).
  
  I'm afraid that may still be insufficient, as the following
  counter-example shows. It causes GHC 6.4.1 to loop in the typechecking
  phase. I haven't checked the latest GHC. The example corresponds to a
  type function (realized as a class E with functional dependencies) in
  the context of an instance. The function in question is
  
   class E m a b | m a -> b
   instance E m (() -> ()) (m ())
  
  We see that the result of the function, m (), is smaller (in the
  number of constructors) than the function's arguments, m and
  () -> (), together. Plus any type variable free in the result is also
  free in at least one of the arguments. And yet it loops.
  
  
  
  {-# OPTIONS -fglasgow-exts #-}
  -- Note the absence of the flag -fallow-undecidable-instances
  
  module F where
  
  class Foo m a where
  foo :: m b -> a -> Bool
  
  instance Foo m () where
  foo _ _ = True
  
  instance (E m a b, Foo m b) => Foo m (a -> ()) where
  foo m f = undefined
  
  class E m a b | m a -> b where
  tr :: m c -> a -> b
  
  -- There is only one instance of the class with functional dependencies
  instance E m (() -> ()) (m ()) where
  tr x = undefined
  
  -- GHC(i) loops
  
  test = foo (\f -> (f ()) :: ()) (\f -> (f ()) :: ())


[Haskell-cafe] (Un)termination of overloading resolution

2006-02-21 Thread Martin Sulzmann
[EMAIL PROTECTED] writes:
  
   Let's consider the general case (which I didn't describe in my earlier
   email).
  
   In case we have an n-ary type function T (or (n+1)-ary type class
   constraint T) the condition says for each
  
   type T t1 ... tn = t
  
   (or rule T t1 ... tn x ==> x=t)
  
   then rank(ti) > rank(t) for each i=1,..,n
  
  I didn't know what condition you meant for the general form. But the
  condition above is not sufficient either, as a trivial modification of the
  example shows. The only modification is
  
   instance E ((->) (m ())) (() -> ()) (m ()) where
  
  and
   test = foo (undefined :: ((() -> ()) -> ()) -> ()) (\f -> (f ()) :: ())
  
  Now we have t1 = ((->) (m ()))  : two constructors, one variable
              t2 = () -> ()       : three constructors
              t  = m ()           : one constructor, one variable
  
  and yet GHC 6.4.1 loops in the typechecking phase as before.


rank (() -> ()) > rank (m ())  does NOT hold.

Sorry, I left out the precise definition of the rank function
in my previous email. Here's the formal definition.

rank(x) is some positive number for variable x

rank(F t1 ... tn) = 1 + rank t1 + ... + rank tn

where F is an n-ary type constructor.

rank (f t) = rank f + rank t

f is a functor variable

Hence, rank (() -> ()) = 3

rank (m ()) = rank m + 1

We cannot verify that 3 > rank m + 1.
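
To make this concrete, here is a small executable rendering of the
rank function (a sketch of mine; the Ty representation and its
constructor names are illustrative, not part of the formal development):

data Ty = TyVar String        -- a type variable x
        | TyCon String [Ty]   -- an n-ary type constructor F t1 ... tn
        | TyApp Ty Ty         -- a functor variable application f t

rank :: Ty -> Int
rank (TyVar _)    = 1                        -- "some positive number"
rank (TyCon _ ts) = 1 + sum (map rank ts)
rank (TyApp f t)  = rank f + rank t

-- rank (() -> ()) = rank (TyCon "->" [TyCon "()" [], TyCon "()" []]) = 3
-- rank (m ())     = rank (TyApp (TyVar "m") (TyCon "()" [])) = rank m + 1
-- and rank m is unknown, so 3 > rank m + 1 indeed cannot be verified.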

So, I still claim my conjecture is correct.

Martin

P. S.

Oleg, can you next time please provide more details
why type inference does not terminate. This will help
others to follow our discussion.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] (Un)termination of overloading resolution

2006-02-21 Thread Martin Sulzmann
[EMAIL PROTECTED] writes:
  
  Martin Sulzmann wrote:
  
   Let's consider the general case (which I didn't describe in my earlier
   email).
   In case we have an n-ary type function T (or (n+1)-ary type class
   constraint T) the condition says for each
  
   type T t1 ... tn = t
  
   (or rule T t1 ... tn x ==> x = t)
  
   then rank(ti) > rank(t) for each i=1,..,n
  
  ...
  
   Sorry, I left out the precise definition of the rank function
   in my previous email. Here's the formal definition.
  
   rank(x) is some positive number for variable x
  
   rank(F t1 ... tn) = 1 + rank t1 + ... + rank tn
  
   where F is an n-ary type constructor.
  
   rank (f t) = rank f + rank t
  
   f is a functor variable
  
  Yes, I was wondering what rank means exactly. But now I do
  have a problem with the criterion itself. The following simple and
  quite common code 
  
   newtype MyIOState a = MyIOState (Int -> IO (a,Int))
  
   instance Monad MyIOState where
   return x = MyIOState (\s -> return (x,s))
  
   instance MonadState Int MyIOState where
   put x = MyIOState (\s -> return ((),x))
  
  
  becomes illegal then? Indeed, the class |MonadState s m| has a
  functional dependency |m -> s|. In our case,
   m = MyIOState,  rank MyIOState = 1
   s = Int,        rank Int = 1
  and so rank(m) > rank(s) is violated, right?
  
  

The additional conditions I propose are only necesssary
once we break the Bound Variable Condition.

Recall: The Bound Variable Condition (BV Condition) says:
for each instance C => TC ts we have that
fv(C) subsetof fv(ts)
(the same applies to (super)class declarations which I leave out
here).

The above MonadState instance does NOT break the BV Condition. 
Hence, everything's fine here, the FD-CHR results guarantee that
type inference is sound, complete and decidable.

Though, your earlier example breaks the BV Condition.

  class Foo m a where
  foo :: m b -> a -> Bool
  
  instance Foo m () where
  foo _ _ = True
  
  instance (E m a b, Foo m b) => Foo m (a -> ()) where
  foo m f = undefined
  
  class E m a b | m a -> b where
  tr :: m c -> a -> b
  instance E m (() -> ()) (m ())

In the second instance, variable b appears only in the context
but not in the instance head. But variable b is captured
by the constraint E m a b where m and a appear in the
instance head and we have that class E m a b | m a -> b.
We say that this instance satisfies the Weak Coverage Condition.

The problem is that Weak Coverage does not guarantee termination.
See this and the earlier examples we have discussed so far.

To obtain termination, I propose to impose stronger conditions
on improvement rules (see above). My guess is that thus
we obtain termination. If we can guarantee termination, we know
that Weak Coverage guarantees confluence. Hence, we can restore
sound, complete and decidable type inference.


  BTW, the above definition of the rank is still incomplete: it doesn't say
  what rank(F t1 ... tm) is where F is an n-ary type constructor and 
  m < n. Hopefully, the rank of an incomplete type application is bounded
  (otherwise, I have a non-termination example in mind). If the rank is
  bounded, then the problem with defining an instance of MonadState
  persists. For example, I may wish for a more complex state (which is
  realistic):
  
   newtype MyIOState a = MyIOState (Int -> IO (a,(Int,String,Bool)))
   instance MonadState (Int,String,Bool) MyIOState 
  
  Now, the rank of the state is 4...
  

The simple solution might be

for any n-ary type constructor F

rank(F t1 ... tm) = 1 + rank t1 + ... + rank tm where m <= n

This might be too naive, I don't know. I haven't thought about the
case where we need to compute the rank of a type constructor.
Though, the style of termination proof I'm using dates back to Prolog which
we know is untyped. Hence, there might not be any problem after all?

Martin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Associated Type Synonyms question

2006-02-20 Thread Martin Sulzmann
Stefan Wehr writes:
  Martin Sulzmann [EMAIL PROTECTED] wrote:
  
   Stefan Wehr writes:
[...]
Manuel (Chakravarty) and I agree that it should be possible to
constrain associated type synonyms in the context of class
definitions. Your example shows that this feature is actually
needed. I will integrate it into phrac within the next few days.

  
   By possible you mean this extension won't break any
   of the existing ATS inference results?
  
  Yes, although we didn't go through all the proofs.
  
   You have to be very careful otherwise you'll loose decidability.
  
  Do you have something concrete in mind or is this a more general
  advice?
  

I'm afraid, I think there's a real issue.
Here's the AT version of Example 15 from Understanding FDs via CHRs

  class D a
  class F a where
   type T a
  instance F [a] where
   type T [a] = [[a]]   
  instance (D (T a), F a) => D [a]
^^^
type function appears in type class

Type inference (i.e. constraint solving) for D [a] will not terminate.
Roughly,

  D [[a]]
-->_instance      D (T [a]), F [a]
-->_type function D [[a]], F [a]
and so on
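
For readers who want to reproduce this loop with a compiler of the
day, here is a hedged sketch of mine using the usual FD encoding of
the type function T as a relation class (the name TF and the driver
definition loop are assumptions of the sketch):

{-# OPTIONS -fglasgow-exts -fallow-undecidable-instances #-}

class D a where d :: a
class TF a b | a -> b            -- encodes the type function T
instance TF [a] [[a]]            -- encodes  type T [a] = [[a]]
instance (TF a b, D b) => D [a] where d = []

-- Solving D [[Int]] cycles exactly as in the derivation above:
--   D [[Int]] --> TF [Int] b, D b --> (b = [[Int]]) --> D [[Int]] --> ...
-- so the solver overflows any fixed context-reduction stack.
loop :: [[Int]]
loop = d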

Will this also happen if type functions appear in superclasses?
Let's see. Consider

 class C a
 class F a where
   type T a
 instance F [a] where
   type T [a] = [[[a]]]
 class C (T a) => D a
 ^
type function appears in superclass context
 instance D [a] => C [[a]] -- satisfies Ross Paterson's Termination Conditions

Consider

  D [a]
-->_superclass    C (T [a]), D [a]
-->_type function C [[[a]]], D [a]
-->_instance      D [[a]], D [a]
and so on


My point:

- The type functions are obviously terminating, e.g.
  type T [a] = [[a]] clearly terminates.
- It's the devious interaction between instances/superclasses
  and type function which causes the type class program
  not to terminate.


Is there a possible fix? Here's a guess.

For each type definition in the AT case

type T t1 = t2

(or improvement rule in the FD case

rule T t1 a ==> a = t2

BTW, any sufficient restriction which applies to the FD
case can be lifted to the AT case and vice versa!)

we demand that the number of constructors in t2
is strictly smaller than the number in t1
(plus some of the other usual definitions).
Then,

type T [a] = [[a]]

although terminating, is not legal anymore.

Then, there might be some hope to recover termination
(I've briefly sketched a proof and termination may
indeed hold but I'm not 100% convinced yet).

Martin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Associated Type Synonyms question

2006-02-16 Thread Martin Sulzmann
Stefan Wehr writes:
  Niklas Broberg [EMAIL PROTECTED] wrote:
  
   On 2/10/06, Ross Paterson [EMAIL PROTECTED] wrote:
   On Fri, Feb 10, 2006 at 05:20:47PM +0100, Niklas Broberg wrote:
- when looking at the definition of MonadWriter the Monoid constraint
is not strictly necessary, and none of the other mtl monads have
anything similar. Is it the assumption that this kind of constraint is
never really necessary, and thus no support for it is needed for ATS?
  
   I think that's right -- it's only needed for the Monad instance for
   WriterT.  But it is a convenience.  In any instance of MonadWriter, the
   w will be a monoid, as there'll be a WriterT in the stack somewhere,
   so the Monoid constraint just saves people writing general functions
   with MonadWriter from having to add it.
  
   Sure it's a convenience, and thinking about it some more it would seem
   like that is always the case - it is never crucial to constrain a
   parameter. But then again, the same applies to the Monad m constraint,
   we could give the same argument there (all actual instances will be
   monads, hence...). So my other question still stands, why not allow
   constraints on associated types as well, as a convenience?
  
  Manuel (Chakravarty) and I agree that it should be possible to
  constrain associated type synonyms in the context of class
  definitions. Your example shows that this feature is actually
  needed. I will integrate it into phrac within the next few days.
  

By possible you mean this extension won't break any
of the existing ATS inference results?
You have to be very careful otherwise you'll loose decidability.

Something more controversial.
Why ATS at all? Why not encode them via FDs?

Example

-- ATS
class C a where
  type T a
  m :: a -> T a
instance C Int where
  type T Int = Int
  m _ = 1

-- FD encoding
class T a b | a -> b
instance T Int Int

class C a where
  m :: T a b => a -> b

instance C Int where
  m _ = 1

-- general recipe:
-- encode type functions T a via type relations T a b
-- replace T a via fresh b under the constraint C a b


The FD program won't type check under ghc but this
doesn't mean that it's not a legal FD program.
It's wrong to derive certain conclusions
about a language feature by observing the behavior
of a particular implementation!
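
For what it's worth, a small variant of the recipe that GHC does
accept -- a sketch of mine, not from the message -- moves the relation
constraint from the method into the class head, at the price of an
extra class parameter:

{-# OPTIONS -fglasgow-exts #-}

class T a b | a -> b
instance T Int Int

class T a b => C a b | a -> b where
  m :: a -> b

instance C Int Int where
  m _ = 1

test :: Int
test = m (0 :: Int)   -- the fundep improves the result type to Int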

Martin


  
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Regular Expressions, was Re: Interest in helping w/ Haskell standard

2005-10-18 Thread Martin Sulzmann

Semantic subtyping issue:

Assume we have a function f of type f :: Reg (r*) -> ...
to which we pass a value x of type Reg (r,r*).
We have that (r,r*) is a semantic subtype of r*,
hence, the code f x is accepted in languages such as XDuce/CDuce.

I'm just saying that the fact regexp can be represented via GADTs
doesn't imply that we get the same expressive power of
languages such as XDuce/CDuce.

Martin


Hongwei Xi writes:
  Hi Martin:
  
  Thanks for the message.
  
  No, I am not on the list.
  
  Back then, we did something similar but for a different
  purpose. The idea is like this:
  
  We wanted to represent an XML document as a pair:
  
  XML (r) * Reg (r)
  
  where r stands for some regexp and Reg(r) is a
  singleton type. When performing search over XML(r),
  we can use Reg(r) as some kind of pilot value
  (which is a lot smaller than XML(r)) to guide the
  search. For instance, it is often easy to tell from
  Reg(r) that an item to be found is not in XML(r) and
  thus we can skip searching XML(r) entirely.
  This is sort of like indexing scheme in database.
  
  BTW, I am not sure what kind of 'semantic typing'
  you have in mind. An example?
  
  --Hongwei
  
  Computer Science Department
  Boston University
  111 Cummington Street
  Boston, MA 02215
  
  Email: [EMAIL PROTECTED]
  Url: http://www.cs.bu.edu/~hwxi
  Tel: +1 617 358 2511 (office)
  Fax: +1 617 353 6457 (department)
  
  
  
  On Mon, 17 Oct 2005, Martin Sulzmann wrote:
  
  
  Very interesting Conor. Do you know Xi et al's APLAS'03 paper?
  (Hongwei, I'm not sure whether you're on this list).
  Xi et al. use GRDTs (aka GADTs aka first-class phantom types)
  to represent XML documents. There may be some connections
  between what you're doing and Xi et al's work.
  
  I believe that there's an awful lot you can  do with GADTs 
  (in the context of regular expressions). But there're two things 
  you can't accomplish:
  semantic subtyping and regular expression pattern matching
  a la XDuce/CDuce. Unless somebody out there proves me wrong.
  
  Martin
  
  
  

[Haskell-cafe] Regular Expressions, was Re: Interest in helping w/ Haskell standard

2005-10-17 Thread Martin Sulzmann

Very interesting Conor. Do you know Xi et al's APLAS'03 paper?
(Hongwei, I'm not sure whether you're on this list).
Xi et al. use GRDTs (aka GADTs aka first-class phantom types)
to represent XML documents. There may be some connections
between what you're doing and Xi et al's work.

I believe that there's an awful lot you can  do with GADTs 
(in the context of regular expressions). But there're two things 
you can't accomplish:
semantic subtyping and regular expression pattern matching
a la XDuce/CDuce. Unless somebody out there proves me wrong.

Martin



Hi folks,

Inspired by Ralf's post, I thought I'd just GADTize a dependently typed
program I wrote in 2001.

Wolfgang Jeltsch wrote:

 Now let's consider using an algebraic datatype for regexps:

   data RegExp
     = Empty | Single Char | RegExp :+: RegExp | RegExp :|: RegExp | Iter RegExp

 Manipulating regular expressions now becomes easy and safe – you are just not
 able to create syntactically incorrect regular expressions since during
 runtime you don't deal with syntax at all.


A fancier variation on the same theme...

  data RegExp :: * -> * -> * where
    Zero   :: RegExp tok Empty
    One    :: RegExp tok ()
    Check  :: (tok -> Bool) -> RegExp tok tok
    Plus   :: RegExp tok a -> RegExp tok b -> RegExp tok (Either a b)
    Mult   :: RegExp tok a -> RegExp tok b -> RegExp tok (a, b)
    Star   :: RegExp tok a -> RegExp tok [a]

  data Empty

The intuition is that a RegExp tok output is a regular expression
explaining how to parse a list of tok as an output. Here, Zero is the
regexp which does not accept anything, One accepts just the empty
string, Plus is choice and Mult is sequential composition; Check lets
you decide whether you like a single token.

Regular expressions may be seen as an extended language of polynomials
with tokens for variables; this parser works by repeated application of
the remainder theorem.

  parse :: RegExp tok x -> [tok] -> Maybe x
  parse r []       = empty r
  parse r (t : ts) = case divide t r of
    Div q f -> return f `ap` parse q ts

Example

*RegExp> parse (Star (Mult (Star (Check (== 'a'))) (Star (Check (== 'b'))))) "abaabaaa"
Just [("a","b"),("aa","b"),("aaa","")]

The 'remainder' explains if a regular expression accepts the empty
string, and if so, how. The Star case is a convenient
underapproximation, ruling out repeated empty values.
  empty :: RegExp tok a -> Maybe a
  empty Zero         = mzero
  empty One          = return ()
  empty (Check _)    = mzero
  empty (Plus r1 r2) = (return Left  `ap` empty r1) `mplus`
                       (return Right `ap` empty r2)
  empty (Mult r1 r2) = return (,) `ap` empty r1 `ap` empty r2
  empty (Star _)     = return []

The 'quotient' explains how to parse the tail of the list, and how to
recover the meaning of the whole list from the meaning of the tail.

  data Division tok x = forall y. Div (RegExp tok y) (y -> x)

Here's how it's done. I didn't expect to need scoped type variables, but
I did...

  divide :: tok -> RegExp tok x -> Division tok x
  divide t Zero      = Div Zero naughtE
  divide t One       = Div Zero naughtE
  divide t (Check p) | p t       = Div One (const t)
                     | otherwise = Div Zero naughtE
  divide t (Plus (r1 :: RegExp tok a) (r2 :: RegExp tok b)) =
    case (divide t r1, divide t r2) of
      (Div (q1 :: RegExp tok a') (f1 :: a' -> a),
       Div (q2 :: RegExp tok b') (f2 :: b' -> b)) ->
        Div (Plus q1 q2) (f1 +++ f2)
  divide t (Mult r1 r2) = case (empty r1, divide t r1, divide t r2) of
    (Nothing, Div q1 f1, _) -> Div (Mult q1 r2) (f1 *** id)
    (Just x1, Div q1 f1, Div q2 f2) ->
      Div (Plus (Mult q1 r2) q2) (either (f1 *** id) (((,) x1) . f2))
  divide t (Star r) = case (divide t r) of
    Div q f -> Div (Mult q (Star r)) (\ (y, xs) -> (f y : xs))

Bureaucracy.

  (***) :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
  (f *** g) (a, c) = (f a, g c)

  (+++) :: (a -> b) -> (c -> d) -> Either a c -> Either b d
  (f +++ g) (Left a)  = Left  (f a)
  (f +++ g) (Right c) = Right (g c)

  naughtE :: Empty -> x
  naughtE = undefined

It's not the most efficient parser in the world (doing some algebraic
simplification on the fly wouldn't hurt), but it shows the sort of stuff
you can do.

Have fun

Conor
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Overlapping Instances with Functional Dependencies

2005-07-11 Thread Martin Sulzmann

This is still an ad-hoc solution, because you lose
the `most-specific' instance property. You really have to
impose a `fixed' ordering in which instance-improvement rules
fire.

Recap: 

The combination of overlapping instances
and type improvement leads to a `non-confluent' system, i.e.
there're too many (inconsistent) choices how to improve and reduce
constraints.

The standard approach to deal with overlapping instances is to
impose a fixed order among the resulting reduction rules
(the `most-specific' order can be seen as a special instance
of a fixed order).

FDs imply improvement rules. In case of overlapping instances these
improvement rules are immediately non-confluent.
As Simon pointed out:
...what ever mechanism is used for instance matching, the same
would be used for type dependencies...
Hence, combining instances and improvement rules is the obvious
`solution'. Hints can be found in my first two replies where I said:
1) ... You find some hints how to achieve this in ... ESOP'04.
2) ...instances and type dependencies are closer linked to each other
  than one might think...
Concretely, the TypeCast trick already appears in the ESOP'04 paper
on p8 (mid-page). 

Conclusion:

I think it's wrong to explain a new feature in terms of an
implementation-specific encoding. We need something more principled
here. Otherwise, we'll face some unexpected behavior (eventually)
again.


Martin



[EMAIL PROTECTED] writes:
  
  Daniel Brown wrote:
  
  class Baz a b | a -> b
  instance Baz (a -> b) (a -> [b])
  instance Baz a a
   ...but Baz fails with this error...
  
   When confronted with overlapping instances, the compiler chooses the
   most specific one (if it is unique), e.g. `Baz (a -> b) (a -> [b])` is
   more specific than `Baz a a`.
  
   But it seems that the combination of the two features is broken: if the
   most specific instance is chosen before checking the functional
   dependency, then the fundep is satisfied; if the fundep is checked
   before choosing the most specific instance, then it isn't.
  
  There is a way to write your example in Haskell as it is. The key idea
  is that functional dependencies can be given *per instance* rather than
  per class. To assert such dependencies, you need the `TypeCast'
  constraint, which is throughly discussed in the HList technical
  report. 
   http://homepages.cwi.nl/~ralf/HList/
  
  The following is the complete code for the example, which runs on GHC
  6.4. We see that the functional dependencies work indeed: the compiler
  figures out the types of test1 and test2 and test3 (and thus resolved
  overloading) without any type signatures or other intervention on our
  part.
  
  
  {-# OPTIONS -fglasgow-exts #-}
  {-# OPTIONS -fallow-undecidable-instances #-}
  {-# OPTIONS -fallow-overlapping-instances #-}
  
  module Foo where
  
  
  {-
  class Baz a b | a - b
  instance Baz (a - b) (a - [b])
  instance Baz a a
  -}
  
  -- No functional dependencies here!
  class Baz a b where baz :: a -> b
  
  -- Rather, dependencies are here
  instance TypeCast a r => Baz a r where
  baz a = typeCast a
  
  instance TypeCast (a -> [b]) r => Baz (a -> b) r where
  baz f = let r = \a -> [f a] in typeCast r
  
  -- Chooses the instance Baz a a
  test1 = baz True
  -- True
  
  -- Chooses the instance Baz (a -> b) (a -> [b])
  test2 = (baz show) (1::Int)
  -- [1]
  
  test3 x = (baz show) x
  test3' = test3 (Just True)
  -- [Just True]
  
  -- copied verbatim from the HList library
  class TypeCast   a b   | a -> b, b -> a   where typeCast   :: a -> b
  class TypeCast'  t a b | t a -> b, t b -> a where typeCast'  :: t -> a -> b
  class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t -> a -> b
  instance TypeCast'  () a b => TypeCast a b where typeCast x = typeCast' () x
  instance TypeCast'' t a b => TypeCast' t a b where typeCast' = typeCast''
  instance TypeCast'' () a a where typeCast'' _ x  = x
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Overlapping Instances with Functional Dependencies

2005-07-08 Thread Martin Sulzmann
Hi,

I wouldn't call this a bug, overlapping instances 
and in particular the combination with functional dependencies
are something which is not well studied yet.
Hence, GHC is very conservative here.

I feel like you, this program should work.
As you correctly point out, there's a conflict among the
two improvement rules (resulting from the instances and FD).
A sensible decision is to apply the same ad-hoc
mechanism to improvement rules that is currently
applied to overlapping instances. Of course, we need some
formal system to express such conditions precisely.
You find some hints how to achieve this in

G. J. Duck, S. Peyton-Jones, P. J. Stuckey, and M. Sulzmann. 
Sound and decidable type inference for functional dependencies. 
In Proc. of ESOP'04

Martin


Daniel Brown writes:
  I have the following three programs:
  
   class Foo a b
   instance Foo (a -> b) (a -> [b])
   instance Foo a a

   class Bar a b | a -> b
   instance Bar (a -> b) (a -> b)
   instance Bar a a

   class Baz a b | a -> b
   instance Baz (a -> b) (a -> [b])
   instance Baz a a
  
  When compiled in ghc 6.4 (with -fglasgow-exts
  -fallow-overlapping-instances -fallow-undecidable-instances) Foo
  and Bar compile fine, but Baz fails with this error:
  
 Baz.hs:2:0:
 Functional dependencies conflict between instance declarations:
    Baz.hs:2:0: instance Baz (a -> b) (a -> [b])
    Baz.hs:3:0: instance Baz a a
  
  This is how I interpret the error: The fundep says a uniquely 
  determines b, but if you have `Baz (Int -> Int) b`, b is `Int -> [Int]`
  according to the first instance and `Int -> Int` according to the second
  instance. b isn't uniquely determined by a, so the functional dependency
  isn't functional -- thus the conflict.
  
  When confronted with overlapping instances, the compiler chooses the 
  most specific one (if it is unique), e.g. `Baz (a -> b) (a -> [b])` is
  more specific than `Baz a a`.
  
  But it seems that the combination of the two features is broken: if the 
  most specific instance is chosen before checking the functional 
  dependency, then the fundep is satisfied; if the fundep is checked 
  before choosing the most specific instance, then it isn't.
  
  Is this a bug, or am I confused?
  
Dan
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Overlapping Instances with Functional Dependencies

2005-07-08 Thread Martin Sulzmann

Simon's dead right, too :) The issue raised here is of general nature and
doesn't depend on the particular (syntactic) formalism used to specify
type dependencies (let it be FDs, ATs,...). The consequence is that
instances and type dependencies are closer linked to each other
then one might think (in case of instance/improvement overlap at least).

Martin


Simon Peyton-Jones writes:
  Martin's dead right.  GHC uses a less sophisticated mechanism to do
  matching when it's thinking about functional dependencies than when it's
  doing straight instance matching.  Maybe something cleverer for fundeps
  would make sense, as you point out.  I hadn't thought of that before;
  it's a good point.
  
  Nowadays, whenever a fundep question comes up I think how would it show
  up if we had associated type synonyms instead of fundeps? (see the
  paper on my home page).  In this case I think the answer is GHC's
  current instance-matching mechanism would work unchanged; or to put it
  another way, what ever mechanism is used for instance matching, the same
  would be used for type dependencies.
  
  Simon

  | I wouldn't call this a bug, overlapping instances
  | and in particular the combination with functional dependencies
  | are something which is not well studied yet.
  | Hence, GHC is very conservative here.
  | 
  | I feel like you, this program should work.
  | As you correctly point out, there's a conflict among the
  | two improvement rules (resulting from the instances and FD).
  | A sensible decision is to apply the same ad-hoc
  | mechanism to improvement rules that is currently
  | applied to overlapping instances. Of course, we need some
  | formal system to express such conditions precisely.
  | You find some hints how to achieve this in
  | 
  | G. J. Duck, S. Peyton-Jones, P. J. Stuckey, and M. Sulzmann.
  | Sound and decidable type inference for functional dependencies.
  | In Proc. of ESOP'04
  | 
  | Martin
  | 
  | 
  | Daniel Brown writes:
  |   I have the following three programs:
  |  
  |  class Foo a b
  |  instance Foo (a -> b) (a -> [b])
  |  instance Foo a a
  |  
  |  class Bar a b | a -> b
  |  instance Bar (a -> b) (a -> b)
  |  instance Bar a a
  |  
  |  class Baz a b | a -> b
  |  instance Baz (a -> b) (a -> [b])
  |  instance Baz a a
  |  
  |   When compiled in ghc 6.4 (with -fglasgow-exts
  |   -fallow-overlapping-instances -fallow-undecidable-instances) Foo
  |   and Bar compile fine, but Baz fails with this error:
  |  
  |  Baz.hs:2:0:
  |  Functional dependencies conflict between instance declarations:
  |    Baz.hs:2:0: instance Baz (a -> b) (a -> [b])
  |Baz.hs:3:0: instance Baz a a
  |  
  |   This is how I interpret the error: The fundep says a uniquely
  |   determines b, but if you have `Baz (Int -> Int) b`, b is `Int -> [Int]`
  |   according to the first instance and `Int -> Int` according to the
  |   second instance. b isn't uniquely determined by a, so the functional
  |   dependency isn't functional -- thus the conflict.
  |  
  |   When confronted with overlapping instances, the compiler chooses the
  |   most specific one (if it is unique), e.g. `Baz (a -> b) (a -> [b])` is
  |   more specific than `Baz a a`.
  |  
  |   But it seems that the combination of the two features is broken: if the
  |   most specific instance is chosen before checking the functional
  |   dependency, then the fundep is satisfied; if the fundep is checked
  |   before choosing the most specific instance, then it isn't.
  |  
  |   Is this a bug, or am I confused?
  |  
  | Dan
  |   ___
  |   Haskell-Cafe mailing list
  |   Haskell-Cafe@haskell.org
  |   http://www.haskell.org/mailman/listinfo/haskell-cafe
  | ___
  | Haskell-Cafe mailing list
  | Haskell-Cafe@haskell.org
  | http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Typed Lambda-Expressions withOUT GADTs

2005-01-08 Thread Martin Sulzmann
Hi,

So far, we've seen GADT encodings based on
the idea of turning each (value) pattern clause
into a (type class) instance declaration.

There's also another encoding based on the idea
of providing (value) evidence for type equality.
(Explored by example by quite a few people,
see upcoming paper for references).

A formal result can be found here:
A Systematic Translation of Guarded Recursive Data Types to
Existential Types
by Martin Sulzmann and Meng Wang
http://www.comp.nus.edu.sg/~sulzmann/
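
Before the encodings, a minimal warm-up sketch (mine, for
orientation): a value of type E a b packages coercions in both
directions, and such equality witnesses can be built and used like
ordinary values.

data E a b = E (a -> b, b -> a)   -- evidence that a and b are equal

refl :: E a a
refl = E (id, id)

sym :: E a b -> E b a
sym (E (f, g)) = E (g, f)

listE :: E a b -> E [a] [b]       -- congruence: lift evidence through []
listE (E (f, g)) = E (map f, map g)

castTo :: E a b -> a -> b         -- use the witness to coerce
castTo (E (f, _)) = f

The encodings below thread such witnesses through existential data
constructors in place of the GADT type equations.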

Example encodings for some of the
examples we've seen so far can be found below.

Martin

- Examples encodings ---

{-# OPTIONS -fglasgow-exts #-}

data E a b = E (a -> b, b -> a)
{-
data Sym env r 
= forall xx.(env = (r,xx)) => FirstSym
| forall env' a.(env = (a,env')) => NextSym (Sym env' r)
-}

data Sym env r 
= forall xx. FirstSym (E env (r,xx))
| forall env' a. NextSym (Sym env' r) (E env (a,env'))
{-
runSym :: Sym env r -> env -> r
runSym FirstSym (r,xx) = r
runSym (NextSym var) (a,env) = runSym var env
-}
-- have to play the pair trick because pattern doesn't match
runSym :: Sym env r -> env -> r
runSym (FirstSym (E (g,h))) pair {-(r,xx)-} = (fst (g pair))
runSym (NextSym var (E (g,h))) pair {-(a,env)-} = runSym var (snd (g pair))

{-
data OpenExpression env r
= forall a r'.(r = (a -> r')) => Lambda (OpenExpression (a,env) r')
| forall r'. Symbol (Sym env r)
| Constant r
| forall a. Application (OpenExpression env (a -> r))
                        (OpenExpression env a)
-}

data OpenExpression env r
= forall a r'. Lambda (OpenExpression (a,env) r') (E r (a -> r'))
| Symbol (Sym env r)
| Constant r
| forall a. Application (OpenExpression env (a -> r))
                        (OpenExpression env a)
{-
runOpenExpression :: OpenExpression env a -> env -> a
runOpenExpression (Constant r) _ = r
runOpenExpression (Application ar a) env = (runOpenExpression ar env)
                                           (runOpenExpression a env)
runOpenExpression (Lambda exp) env = \a -> runOpenExpression exp (a,env)
runOpenExpression (Symbol var) env = runSym var env
-}

runOpenExpression :: OpenExpression env a -> env -> a
runOpenExpression (Constant r) _ = r
runOpenExpression (Application ar a) env = (runOpenExpression ar env)
                                           (runOpenExpression a env)
runOpenExpression (Lambda exp (E (g,h))) env = h (\a -> runOpenExpression exp (a,env))
runOpenExpression (Symbol var) env = runSym var env

newtype Expression r = MkExpression (forall env. OpenExpression env r)

runExpression :: Expression r - r
runExpression (MkExpression expr) = runOpenExpression expr () 

eid = E (id,id)
sym0 :: OpenExpression (a,env) a
sym0 = Symbol (FirstSym eid)

sym1 :: OpenExpression (a,(b,env)) b
sym1 = Symbol (NextSym (FirstSym eid) eid)

sym2 :: OpenExpression (a,(b,(c,env))) c
sym2 = Symbol (NextSym (NextSym (FirstSym eid) eid) eid)


k :: Expression (a - b - a)
-- k = \x.\y -> x = \.\.1
k = MkExpression (Lambda (Lambda sym1 eid) eid)

s :: Expression ((a - b - c) - (a - b) - a - c)
-- s = \x.\y.\z -> x z (y z) = \.\.\.2 0 (1 0)
s = MkExpression (Lambda (Lambda (Lambda (Application (Application sym2 sym0)
                                                      (Application sym1 sym0)) eid) eid) eid)


test3 = (runExpression s) (runExpression k) (runExpression k)

{-# OPTIONS -fglasgow-exts #-}
import Control.Monad.State (MonadState (..))
data E a b = E (a -> b, b -> a)

data State s a
  = forall b. Bind (State s a) (a -> State s b) (E a b)
  | Return a 
  | Get (E a s)
  | Put s (E a ()) 

runState :: State s a -> s -> (s,a)
runState (Return a) s = (s,a)
runState (Get (E (g,h))) s = (s,(h s))
runState (Put s (E (g,h))) _ = (s,(h ()))
runState (Bind (Return a) k (E (g,h))) s 
  = let --cast :: State s b -> State s a
        cast (Return a) = Return (h a)
        cast (Get (E (g',h'))) = Get (E (\x -> g' (g x), \x -> h (h' x)))
        cast (Put s (E (g',h'))) = Put s (E (\x -> g' (g x), \x -> h (h' x)))
        cast (Bind s' k' (E (g',h'))) = Bind (cast s') (\x -> k' (g x))
                                             (E (\x -> g' (g x), \x -> h (h' x)))
    in runState (cast (k a)) s
runState (Bind (Get (E (g1,h1))) k (E (g,h))) s 
  = let cast (Return a) = Return (h a)
        cast (Get (E (g',h'))) = Get (E (\x -> g' (g x), \x -> h (h' x)))
        cast (Put s (E (g',h'))) = Put s (E (\x -> g' (g x), \x -> h (h' x)))
        cast (Bind s' k' (E (g',h'))) = Bind (cast s') (\x -> k' (g x))
                                             (E (\x -> g' (g x), \x -> h (h' x)))
    in runState (cast (k (h1 s))) s
runState (Bind (Put s (E (g1,h1))) k (E (g,h))) _  
  = let cast (Return a) = Return (h a)
        cast (Get (E (g',h'))) = Get (E (\x -> g' (g x), \x -> h (h' x)))
        cast (Put s (E (g',h'))) = Put s (E (\x -> g' (g x), \x -> h (h' x)))
        cast (Bind s' k' (E (g',h'))) = Bind (cast s') (\x -> k' (g x))
                                             (E (\x -> g' (g x), \x -> h (h' x)))
    in runState (cast (k (h1 ()))) s
runState (Bind (Bind m k1 (E (g1,h1))) k2 (E (g,h))) s 
  = runState m (\x -> Bind (k1 x) k2 (E (g,h))) s


{-
instance Monad (State s) where
(=) = Bind
return = Return

instance MonadState s (State s) where
get = Get
put

[Haskell-cafe] [Haskell] Lexically scoped type variables

2004-11-26 Thread Martin Sulzmann
Hi,

let me answer your questions by comparing what's implemented in Chameleon.
(For details see
http://www.comp.nus.edu.sg/~sulzmann/chameleon/download/haskell.html#scoped)

   QUESTION 1 -
  
  In short, I'm considering adopting the Mondrian/Chameleon rule for GHC.
  There are two variations
  
1a) In the example, 'a' is only brought into scope in the
right hand side if there's an explicit 'forall' written by
  the programmer
1b) It's brought into scope even if the forall is implicit; e.g.
   f :: a -> a
   f x = (x::a)
  
  I'm inclined to (1a). Coments?
  

Currently, Chameleon goes for 1b), i.e. foralls are implicit. I agree
that 1a) might help the programmer to immediately see which variables
are bound by the outer scope.
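
For comparison, a minimal sketch of option (1a) in later-GHC notation
(the LANGUAGE pragma spelling is today's, not part of this message):

{-# LANGUAGE ScopedTypeVariables #-}

f :: forall a. a -> a
f x = (x :: a)   -- 'a' scopes over the body only because of the
                 -- explicit forall; with an implicit forall, the
                 -- inner 'a' would be a fresh, unrelated variable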

  
  - QUESTION 2 --
  
  [...]
  
  The alternatives I can see are
  
  2a) Make an arbitrary choice of (A) or (B); GHC currently chooses (B)
  2b) Decide that the scoped type variables arising from pattern
   bindings scope only over the right hand side, not over
   the body of the let
  2c) Get rid of result type signatures altogether; instead,
use choice (1a) or (1b), and use a separate type signature
  instead.
  
  Opinions?
  

Chameleon goes for 2c)

A Chameleon speciality is that we can write

f ::: a -> a
f x = True

f ::: a -> a states that f has type a -> a for some a.

::: follows the same scoping rules as ::

Then, the following statement

let f (x::[a],ys) = rhs
in body

(I assume that x::[a] states here that x has type [a] for some a)

can be encoded as

let f ::: ([a],b) -> c
f (x::[a],ys) = rhs
in body


The main motivation behind Chameleon's lexically scoped annotations
was to allow for programs such as

class Eval a b where eval :: a -> b
f :: Eval a (b,c) => a -> b
f x = let g :: (b,c)
          g = eval x
      in fst g
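
As a hedged sketch, the same program in later-GHC notation (extension
names are today's spellings, not part of the message); the explicit
forall brings b and c into scope for g's signature:

{-# LANGUAGE ScopedTypeVariables, MultiParamTypeClasses, FlexibleContexts #-}

class Eval a b where eval :: a -> b

f :: forall a b c. Eval a (b,c) => a -> b
f x = let g :: (b,c)    -- b and c refer to the outer signature
          g = eval x
      in fst g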

As Josef pointed out, there are also examples where it might be useful
that some inner annotations refer to variable a from the outer annotation.

Martin
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] A puzzle and an annoying feature

2004-11-26 Thread Martin Sulzmann
Lennart Augustsson writes:

  [...]
  
  But using functional dependencies feels like a sledge hammer,
  and it is also not Haskell 98.
  

Well, I'm simply saying that your proposed extension which is not
Haskell 98 can be expressed in terms of a known type class extension.

I agree that something weaker than FDs would be sufficient here.

Martin

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] [Haskell] A puzzle and an annoying feature

2004-11-25 Thread Martin Sulzmann

[Discussion moved from Haskell to Haskell-Cafe]

Hi,

Regarding

- lazy overlap resolution aka unique instances

Well, if there's only one instance, which is not exported, then you can
use functional dependencies.

Assume

class C a
instance ... => C t

Internally, use

class C a | -> a
instance ... => C t
 

Furthermore, there seems to be an issue that has been overlooked so far.

- Providing sufficient type annotations

Recall

Lennart Augustsson writes:
  Here is a small puzzle.
  
  -- The following generates a type error:
  f :: Char -> Char
  f c =
   let x = g c
   in  h x
  

Lennart magically inserted a type annotation to resolve the ambiguity.

  -- But this definition does not:
  f :: Char -> Char
  f c =
   let x :: Bool
   x = g c
   in  h x
  

Clearly, there are a number of ways to resolve the ambiguity.
The Chameleon type debugger assists the user in identifying
which locations are responsible.

Here's the Chameleon type debugger error message generated
for the unannotated function f


temp.ch:2: ERROR: Inferred type scheme is ambiguous:
Type scheme: forall b. C b => Char -> Char
  Suggestion: Ambiguity can be resolved at these locations
  b: f :: Char -> Char
 f c = let x = g c
   ^   ^
   in  h x
   ^ ^
(The ^ correspond to highlightings of the Chameleon type debugger)


This may help the user to realize where and which annotations are
missing. E.g., each of the following annotations (there are more
possibilities) will do the job.

f :: Char -> Char
f c =
 let -- x :: Bool (1)
     x = g c
 in  h x -- h (x::Bool) (2)
         -- (h::Bool->Char) x (3)
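
To see annotation (2) in action, here is a self-contained
reconstruction (mine; the class C and its Bool instance merely stand
in for Lennart's overloaded g and h):

class C a where
  g :: Char -> a
  h :: a -> Char

instance C Bool where
  g _ = True
  h _ = 'x'

f :: Char -> Char
f c = let x = g c
      in  h (x :: Bool)   -- annotation (2); without it the class
                          -- variable never reaches f's type and
                          -- remains ambiguous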


Martin

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe