Re: the MPTC Dilemma (please solve)

2006-02-17 Thread Ross Paterson
Martin Sulzmann <[EMAIL PROTECTED]> writes:
> - There's a class of MPTC/FD programs which enjoy sound, complete
>   and decidable type inference. See Result 1 below. I believe that
>   hugs and ghc faithfully implement this class.
>   Unfortunately, for advanced type class acrobats this class of
>   programs is too restrictive.

Not just them: monad transformers also fall foul of these restrictions.
The restrictions can be relaxed to accommodate them (as you do with the
Zip class), but the rules become more complicated.
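To make that concrete, here is a hedged sketch (cut-down class and transformer; the real mtl code differs) of the kind of lifted monad-transformer instance that falls foul of the Coverage Condition:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Cut-down versions of mtl's MonadState and ReaderT, for illustration only.
class MonadState s m | m -> s where
  get :: m s

newtype ReaderT r m a = ReaderT { runReaderT :: r -> m a }

-- In this instance head the FD's range variable s does not occur in its
-- domain type (ReaderT r m); s is fixed only via the FD of the context.
-- That violates Jones's Coverage Condition, so GHC accepts it only with
-- -fallow-undecidable-instances (hugs: -98 mode).
instance MonadState s m => MonadState s (ReaderT r m) where
  get = ReaderT (\_ -> get)

-- A trivial base monad so the example is self-contained and runnable.
newtype St s a = St { runSt :: s -> (a, s) }

instance MonadState s (St s) where
  get = St (\s -> (s, s))

main :: IO ()
main = print (fst (runSt (runReaderT get ()) (42 :: Int)))  -- prints 42
```

The instance is perfectly well-behaved operationally; it is only the static Coverage Condition that rejects it, which is why the relaxed (Weak Coverage) rules below matter in practice.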

> Result2:
> Assuming we can guarantee termination, then type inference
> is complete if we can satisfy
>- the Bound Variable Condition,
>- the Weak Coverage Condition, 
>- the Consistency Condition, and
>- FDs are full.
> Effectively, the above says that type inference is sound,
> complete but semi-decidable. That is, we're complete
> if each inference goal terminates.

I think that this is a little stronger than Theorem 2 from the paper,
which assumes that the CHR derived from the instances is terminating.
If termination is obtained via a depth limit (as in hugs -98 and ghc
-fallow-undecidable-instances), it is conceivable that for a particular
goal, one strategy might run into the limit and fail, while a different
strategy might reach success in fewer steps.

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Strict Haskell debate

2006-02-17 Thread Andy Gill


On Feb 17, 2006, at 3:30 PM, Ashley Yakeley wrote:

> Andy Gill wrote:
>> I'd like to see a way of enforcing return strictness, that is where you
>> have confidence that what a function is returning is fully evaluated.
>> Imagine a function hstrict;
>>
>>  hstrict :: a -> a
>
> Is this like deepseq, that strictly evaluates internal structure using seq?

Yes, it is.

>> With hstrict you can write functions in the style.
>>
>>  fun f a b c = hstrict $ 
>>    where
>>      ...
>>      ...
>
> But surely fun can return the unevaluated thunk (hstrict x)? Since
> hstrict has not yet been called, it can't do its strictifying magic,
> whatever that is.

No. hstrict will always be called before returning. Evaluation does
not return thunks; they get created by lets/wheres (at the Core level),
not by function application/evaluation.

Andy Gill



Re: Strict Haskell debate

2006-02-17 Thread Ashley Yakeley

Andy Gill wrote:

> I'd like to see a way of enforcing return strictness, that is where you
> have confidence that what a function is returning is fully evaluated.
>
> Imagine a function hstrict;
>
>  hstrict :: a -> a

Is this like deepseq, that strictly evaluates internal structure using seq?

> With hstrict you can write functions in the style.
>
> fun f a b c = hstrict $ 
>   where
>     ...
>     ...

But surely fun can return the unevaluated thunk (hstrict x)? Since
hstrict has not yet been called, it can't do its strictifying magic,
whatever that is.


--
Ashley Yakeley



Strict Haskell debate

2006-02-17 Thread Andy Gill

I'd like to see a way of enforcing return strictness, that is where you
have confidence that what a function is returning is fully evaluated.

Imagine a function hstrict;

hstrict :: a -> a

you can define hseq with this.

hseq :: a -> b -> b
hseq a b = hstrict a `seq` b

and $!!.

($!!) :: (a -> b) -> a -> b
a $!! b = a $! (hstrict b)

With hstrict you can write functions in the style.

fun f a b c = hstrict $ 
  where
...
...

and also

fun f a b c = hstrict $ bla { abc = def , cfg = tts , ... }

We know that strictness plays no part in the returned value
after the function is finished evaluating, which is a Useful Property.
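As a point of comparison, hstrict can already be sketched in source Haskell as a type class (an assumption for illustration, not an existing library; the class and instances here are essentially the deepseq idea, and notably lack the amortization property asked for below):

```haskell
-- Hypothetical HStrict class: hstrict forces its argument's full
-- structure when the result is demanded to WHNF.
class HStrict a where
  hstrict :: a -> a

instance HStrict Int where
  hstrict n = n            -- WHNF of an Int is already fully evaluated

instance (HStrict a, HStrict b) => HStrict (a, b) where
  hstrict p@(x, y) = hstrict x `seq` hstrict y `seq` p

instance HStrict a => HStrict [a] where
  hstrict xs = go xs `seq` xs
    where go []       = ()
          go (y : ys) = hstrict y `seq` go ys

-- hseq and ($!!) derived exactly as in the text above.
hseq :: HStrict a => a -> b -> b
hseq a b = hstrict a `seq` b

infixr 0 $!!
($!!) :: HStrict a => (a -> b) -> a -> b
f $!! a = f $! hstrict a

main :: IO ()
main = print (length $!! map (+ 1) [1 .. 5 :: Int])  -- prints 5
```

The class-based version re-traverses the structure on every call, which is exactly the cost the runtime-bit proposal below is meant to avoid.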

What I'd like is an efficient hstrict, one that does not re-evaluate
anything that has already been hstrict-ified. In the same way as we
evaluate an object, we could strictify an object using a similar
(low-level) mechanism, so the cost of calling hstrict would be
amortized away to almost nothing.


How much work would this be to add hstrict to GHC? An extra bit in the
runtime representation? Could we use the speculation mechanism to do it?

Andy Gill





Re: the MPTC Dilemma (please solve)

2006-02-17 Thread Isaac Jones
I'm forwarding an email that Martin Sulzmann asked me to post on his
behalf.


From: Martin Sulzmann <[EMAIL PROTECTED]>
Subject: MPTC/FD dilemma 

Here's my take on the MPTC/FD dilemma.

There has been lots of discussion about MPTC/FDs. They are very useful
no doubt. But they also show inconsistent behavior across different
Haskell platforms (e.g. see Claus Reinke's last email).

Claus Reinke's example is "interesting" because he mixes some advanced form
of FDs with overlapping instances. Something which has never been
studied before (besides a few email exchanges between Oleg and myself
on Haskell a few months ago). So, I propose to ignore overlapping
instances for the moment.

What follows is rather lengthy. The main points of this email are:

- There's a class of MPTC/FD programs which enjoy sound, complete
  and decidable type inference. See Result 1 below. I believe that
  hugs and ghc faithfully implement this class.
  Unfortunately, for advanced type class acrobats this class of
  programs is too restrictive.

- There's a more expressive class of MPTC/FD programs which enjoy
  sound and complete type inference if we can guarantee termination
  by for example imposing a dynamic check.  See Result 2 below. Again,
  I believe that  hugs and ghc faithfully implement this class if they
  additionally implement a dynamic termination check.
  Most type class acrobats should be happy with this class I believe.

- ATs (associated types) will pose the same challenges.
  That is, type inference relies on dynamic termination checks.

Here are the details.

Let's take a look at the combination of FDs and "well-behaved" instances.
By well-behaved instances I mean instances which are non-overlapping and
terminating. From now on I will assume that instances must be well-behaved.
The (maybe) surprising observation is that the combination
of FDs and well-behaved instances easily breaks completeness and
decidability of type inference. Well, all this is well-studied.
Check out
[February 2006] Understanding Functional Dependencies via Constraint
Handling Rules by Martin Sulzmann, Gregory J. Duck, Simon Peyton-Jones
and Peter J. Stuckey
which is available via my home-page.

Here's a short summary of the results in the paper:

Result 1:
To obtain sound (we always have that), complete and decidable type inference
we need to impose
- the Basic Conditions (see Sect4)
  (we can replace the Basic Conditions by any condition which guarantees
   that instances are well-behaved. I'm ignoring here
   super classes for simplicity)
- Jones's FD restrictions and the Bound Variable Condition (see Sect4.1)

The trouble is that these restrictions are quite "severe". In fact,
there's not much hope to improve on these results. See the discussion
in Sect5.1-5.3.

Let's take a look at a simple example to understand the gist of the problem.

Ex: Consider

class F a b | a->b
instance F a b => F [a] [b] -- (F)

Assume some program text generates the constraint F [a] a.
Type inference will improve a to [b] (because of the
functional dependency in connection with the instance).
But then we'll find the constraint F [[b]] [b],
which can be resolved by the instance to F [b] b.
This is a renamed copy of the original constraint,
hence type inference will not terminate here.

If we translate the instance and improvement conditions of
the above program to CHRs the problem becomes obvious.

rule F a b, F a c ==> b=c      -- the FD rule
rule F [a] [b]   <==> F a b    -- the instance rule
rule F [a] c      ==> c=[b]    -- the improvement rule

The rules should be read from left to right where
"==>" stands for propagation and "<==>" for simplification.
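Running these rules on the example constraint makes the loop explicit (a sketch of the derivation; the fresh variables b1, b2, ... are introduced by the improvement rule):

```
    F [a] a
==> { improvement rule: a = [b1] }
    F [[b1]] [b1]
<==> { instance rule }
    F [b1] b1          -- a renamed copy of F [a] a
==> { improvement rule: b1 = [b2] }
    F [[b2]] [b2]
<==> ...
```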

My point: the above improvement rule is (potentially) dangerous
because we introduce a fresh variable b, and that's why type inference
may not terminate. Using the terminology from the paper: the
instance (F) violates one of Jones's FD restrictions (the Coverage
Condition).

So, is all lost? Not quite. A second major result in the paper says
that if we can guarantee termination then we can guarantee completeness.
Of course, we need to find some appropriate replacement for 
Jones' FD restrictions.

Result2:
Assuming we can guarantee termination, then type inference
is complete if we can satisfy
   - the Bound Variable Condition,
   - the Weak Coverage Condition, 
   - the Consistency Condition, and
   - FDs are full.
Effectively, the above says that type inference is sound,
complete but semi-decidable. That is, we're complete
if each inference goal terminates.

Here's a short explanation of the conditions.
The Bound Variable Condition says that all variables which appear
in the instance head must appear in the instance body.
Weak Coverage says that any of the "unbound" variables (see b
in the above example) must be captured by a FD in the instance context.
This sounds complicated but is fairly simple. Please see the paper.

Re: Proposal: pattern synonyms

2006-02-17 Thread Niklas Broberg
> Most of this discussion on patterns (except for views)
> seems too much focused on concrete data types.
> (regexps for lists: what a horrible thought :-)
> This is just the thing to avoid in software design.
> (Don't expose implementation details, use interfaces etc.)

There's nothing in HaRP that would not work seamlessly with any
sequential datatype through an interface of destructors, and clearly
that's the Right (TM) way to go. The current implementation is just
proof of concept. :-)

IMO your comment only further speaks for my proposal to add guards to
pattern synonyms. With an interface of destructors, you could define
patterns that don't say anything about the underlying implementation,
e.g.

Head x = xs | x <- head xs

where the head function comes from some interface for sequences. This
is not something that can be done currently, nor with the initial
proposal for pattern synonyms.
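For what it's worth, part of the effect of such a guarded synonym can be approximated with GHC's ViewPatterns extension (a sketch only; `listToMaybe` stands in for a hypothetical sequence-interface destructor, and the proposed synonym syntax itself is of course not implemented):

```haskell
{-# LANGUAGE ViewPatterns #-}

import Data.Maybe (listToMaybe)

-- Match the head of a sequence through an interface function rather
-- than a concrete constructor, in the spirit of
-- "Head x = xs | x <- head xs".
firstOr :: Int -> [Int] -> Int
firstOr _ (listToMaybe -> Just x) = x
firstOr d _                       = d

main :: IO ()
main = print (firstOr 0 [7, 8, 9], firstOr 0 [])  -- prints (7,0)
```

Unlike a true pattern synonym, the view pattern is not named and reusable at the pattern level, which is precisely what the guarded-synonym proposal would add.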

/Niklas


Re: Proposal: pattern synonyms

2006-02-17 Thread Johannes Waldmann
Most of this discussion on patterns (except for views)
seems too much focused on concrete data types.
(regexps for lists: what a horrible thought :-)
This is just the thing to avoid in software design.
(Don't expose implementation details, use interfaces etc.)
On the other hand, this is exactly what keeps the refactoring people
and their tools happy: they will never run out of work
as long as the above advice isn't followed ...
-- 
-- Johannes Waldmann -- Tel/Fax (0341) 3076 6479/80 --
 http://www.imn.htwk-leipzig.de/~waldmann/ ---



Re: Proposal: pattern synonyms

2006-02-17 Thread Niklas Broberg
On 2/16/06, Henrik Nilsson <[EMAIL PROTECTED]> wrote:
> Conor and I discussed this over lunch.
>
> Specifically, we talked about whether the right hand side of a pattern
> synonym would be any Haskell pattern (including "_", "~", possibly "!"),
> or restricted to the intersection between the patterns and terms, as
> Conor proposes that pattern synonyms also be used for construction.
>
> By adopting some simple conventions, like replacing "_" by "undefined"
> when a synonym is used as a term for construction, it is clear that one
> can be a bit more liberal than a strict intersection between the pattern
> and current expression syntax.

I would speak against this. I like the idea of pattern synonyms, but I
think they should be just that - pattern synonyms, and not try to mix
in expressions as well. With the current (H98) pattern situation, it
*might* be possible to tweak things so that all patterns have an
expressional meaning (e.g. "_" as "undefined"), but 1) it would be
fairly contrived in many cases, and 2) it would complicate making
extensions to the pattern matching facility.

(Shameless plug:) In particular I'm thinking about our extension HaRP
[1] that adds regular expressions to pattern matching over lists. The
ability to define pattern synonyms would be really useful in
conjunction with HaRP, but if those patterns are required to also have
an expressional meaning it would make things fairly complicated, not
to say impossible.

Instead I would like to propose an extension to the proposed extension
in another direction: Adding in (pattern) guards. Consider patterns
like

IsSpace x = x | isSpace x
Last x = xs | x <- last xs

This would in particular go well together with HaRP, where you in some
cases need guards to be inlined, e.g.

words [ (IsSpace _)*!, (/ xs@:(_*), (IsSpace _)*! /)* ] = xs

Btw, why not consider adding regular patterns a la HaRP to Haskell'? :-)

(Disclaimer: I'm not really serious, like Lennart I don't really feel
that any of this has any place in Haskell'. But as for the future
beyond that, I am serious.)

[1] http://www.cs.chalmers.se/~d00nibro/harp