Re: Records in Haskell

2012-01-26 Thread Anthony Clayden
Ryan Newton rrnewton at gmail.com writes:

 
 I admit I'm a big fan of polymorphic extension.  But I
don't love it enough 
for it to impede progress!  
 
 Regarding extension:  In trying to read through all this
material I don't 
see a lot of love for lacks constraints a la TRex.  
 Cheers,
   -Ryan
 
Hi Ryan, I think the lacks constraint is sadly
misunderstood, and in fact 
something like it will be needed eventually.

[A thousand apologies for the multiposts! somewhere between
gmaniac and pipermail,
 half my posts are going missing: I was only just warming up
this far. Try again ... again ... third time]


(If anybody who knows the internals of Hugs/TRex is
listening, it would be 
great to get confirmation of the following.)

As you say, it relates to whatever might happen for
polymorphic records, so is 
outside the scope of what SPJ is trying to address in this
thread.

On the other hand, let's try to avoid developing an approach
for 'Records in 
Haskell' that has to be undone when/if we get more
polymorphic.

From all the suggestions in the current thread, I'm
expecting the approach 
will include a Has class with methods get and set. It would
be sweet if in 
future the same Has class could be extended to extended(!)
records, anonymous 
records, renamed labels, projections on records, merged
records (as you'd get 
from a relational join), etc. Specifically:
    Has r l t => ...                 -- really must mean:
        there's exactly one field labelled l in record r, at type t
        (which is a constraint on whatever merged/extended term r is)
compare TRex's
    (r\l) => ... Rec {| l : t | r |} ...   -- which really means:
        there's exactly one field labelled l in the Rec, at type t
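
To make that concrete, here is a minimal sketch of the shape I have in
mind (only a sketch: the class and method names, and the FunDep, are my
guesses at a design, not something agreed in the thread):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- Has r l t: record r has exactly one field labelled l, at type t.
-- The FunDep records that the field's type is determined by the
-- record type together with the label.
class Has r l t | r l -> t where
  get :: r -> l -> t
  set :: l -> t -> r -> r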

In hindsight, I think it was unfortunate that the original
TRex paper [1] used the word 'lacks' in the context of its
notation for constraining the record structure. (I'm looking
especially at section 2.1 'Basic Operations'.) In all the
operations, the types always include a Rec term with _both_
l and r. Not all of them have a term with just a 'bare'
  Rec {| r |}
TRex is trying to avoid a positional specification of the
record fields (which 
is 'implementation determined'/hidden from the programmer).
But I think of 'bare' r as representing a record with a
'hole' at
whatever position l is in the full record. (So the
constraint (r\l) means: you're
going to find a Rec with exactly one l in it; if you also
find a Rec with 'bare'
r, that means the same Rec, but with a 'hole'.)

The HList guys picked up the word 'lacks', and adapted it
(successfully, in the context of what they were doing) to
build Type-Indexed HLists -- that is, record-like structures
with exactly one field labelled l.

Perhaps TRex should have used a notation something like:
   (rr : l @ t) => Rec {| rr |} ...        -- HasUnique rr l t
   ... Rec {| rr \ l |} ...                -- rr with a hole in place of l


You say:
  As one anecdote, I've been very pleased using Daan
Leijen's scoped labels 
approach
My anecdote: the new approaches and extensions to type
inference in GHCi have 
been frustratingly slow in arriving and maturing. But we now
have working 
superclass constraints, including type equality (~), and
some heavy-weight 
type inference. I've built a toy (and somewhat limited)
polymorphic record 
system (with Has/get/set), which:
treats Data types as records; and
treats tuples as anonymous (type-indexed) records; and
implements project, extend and merge.
It relies on overlapping instances (so I don't mention it in
polite company -- 
at least it doesn't use FunDeps to boot!). I achieve the
effect of 'HasUnique' 
through instance resolution: if there's more than one
occurrence of label l in
the record term, GHC bitches. (This is fragile: I can't use
IncoherentInstances, and sometimes UndecidableInstances
gives trouble.)
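
For flavour only, here's a very rough sketch of the tuples-as-type-indexed-
records trick (illustrative, not my actual code; class and method names are
made up, and it's in the 7.x style with OverlappingInstances):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances,
             OverlappingInstances #-}

-- Select a component of a nested pair by its type.
class HasT r t where
  getT :: r -> t

instance HasT (t, rest) t where                  -- the sought type is the head
  getT (x, _) = x

instance HasT rest t => HasT (r', rest) t where  -- keep looking in the tail
  getT (_, r) = getT r

-- e.g.  (getT ('x', (True, ())) :: Bool)  ==>  True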


[1] Benedikt R. Gaster and Mark P. Jones, 'A Polymorphic Type
System for Extensible Records and Variants', 1996.







Re: [Haskell-cafe] Some thoughts on Type-Directed Name

2012-01-28 Thread Anthony Clayden
 There is an effort underway to make Haskell's Records
 better. The discussion is ongoing on the ghc-users mail
 list, ...
 in the direction of making the most minimal changes
 possible to achieve some simple record name-spacing.
 
 Thanks,
 Greg Weber

Thank you Greg,

Yes I know, and I have been trying to follow along
(intermittently). Thank you for your attempts to marshall
the discussion.

What would really, really help me is for someone to have a
look at the 'solution' I posted to the difficulties SPJ saw
with the SORF approach. (I ref'd it in my reply to Steve.)

It seemed from my testing to address the needs. Since I got
it working in GHC 7.2.1, there's a good chance it will need
only minimal changes to implement (I'm thinking mostly
syntactic sugar) -- providing of course that it is workable
and generalisable enough.

It could possibly benefit from some of the new Kind-level
stuff in 7.4.1 (that SPJ used, but wasn't available to me at
the time).

I keep trying to make the time to write up the full proposal
on the Wiki. I see it as a 'tweak' to SORF. Given that I'm
supposed to have a day job, I'm reluctant to make time
until/unless someone else double-checks whether I'm barking
up the wrong tree.

Anthony






Re: Records in Haskell

2012-01-28 Thread Anthony Clayden
 On Thu, Jan 26, 2012 at 8:02 PM, AntC
 anthony_clay...@clear.net.nz wrote:
 
  Ryan Newton rrnewton at gmail.com writes:
 
  
   I admit I'm a big fan of polymorphic extension.  But I
  don't love it enough
  for it to impede progress!
 
 
 Records proposals for Haskell have repeatedly foundered on
 the rocks of extensibility.  Meanwhile, it seems like
 years of experience with field extensibility in OO
 languages has shown that it's not an especially good idea,
 with authors on programming practice militating for
 information hiding instead.
 
 I don't think it's worth treading that path yet again in
 Haskell.
 
 -Jan

Jan, I agree we shouldn't try leaping forward to
extensibility yet. I disagree that we should abandon any
thoughts of it and produce a stopgap approach for records
that won't ever be extensible.

The 'proof of concept' I posted to the list last month
http://www.haskell.org/pipermail/glasgow-haskell-users/2011-December/021298.html
already includes get and set over polymorphic records and
polymorphic fields, I believe, which is half way there.

I love Haskell for the way it learns from well-structured
mathematical approaches -- in the way I feel OO doesn't.
There is a well-structured mathematically sound approach for
extensibility, as it happens. It dates back to 1969. It is
the 'engine' behind large-scale programming systems all over
the world every day. I do and have worked with a lot of
them.

It's called Relational Algebra; it's based on set theory;
it's declarative -- which should fit smoothly with Haskell.
It has an operation to extend records, called 'extend' (!).
It has an operation to merge records, called 'join'. It has
an operation to concatenate records, called 'cross-product'.
You'll probably know it by its 'awkward cousin' SQL. There
are many reasons for hating SQL, and there are many reasons
why it's a bad fit to OO -- especially because SQL is
declarative.

There are many reasons to go back to the better-founded
mathematical basis that pre-dates SQL.

AntC



Re: simple extension to ghc's record disambiguation rules

2012-02-17 Thread Anthony Clayden
 Hi, I'd like to propose an extremely simple extension to
 ghc's record disambiguation rules,
 

John, I've just posted a proposal on the 'Records in
Haskell' wiki that I think will do the job for you.
Declared Overloaded Record Fields (DORF). I'd appreciate
feedback.


 my motivation is that I often have record types with
 multiple constructors but common fields.
 

This is exactly the use case I'm aiming at.

 
 so, my proposal is that when you come across something
 like
 
 (e::RecType) { blah = foo }
 
 (with an explicit type signature like shown)
 ...

My proposal is the same, except that you don't need a type
sig (or not always).

That record syntax desugars to a call to `set', which is a
method of class `Has' (which also has method `get').

Has/get/set are overloaded for each record/field
combination. The instance is generated at the record
declaration, instead of the H98 record selector function.
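
As an illustration only (the label type and names here are made up; DORF's
actual surface syntax differs), the generated instance and the desugaring
might look like:

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances #-}

data RecType = RecType { blah_ :: Int }   -- the record declaration
data Blah = Blah                          -- a proxy standing in for the label

class Has r l t | r l -> t where
  get :: r -> l -> t
  set :: l -> t -> r -> r

-- generated at the record declaration, in place of an H98 selector:
instance Has RecType Blah Int where
  get r _   = blah_ r
  set _ x r = r { blah_ = x }

-- so that  e { blah = foo }  desugars to something like:
update :: RecType -> Int -> RecType
update e foo = set Blah foo e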

 
 It is also backwards compatible for expressions, but would
 be a new thing for patterns which generally don't allow
 type signatures there.
 

DORF is backwards compatible with H98 record selectors, in
the sense that polymorphic selectors (using `get') are just
functions.


 It sidesteps type checker interactions by only being
 triggered when an explicit type annotation is included.
 

(DORF uses usual instance resolution by record type and
field.)

 ideally it would be combined with the 'update' and
 'label-based pattern-matching' extensions from this page

http://hackage.haskell.org/trac/haskell-prime/wiki/ExistingRecords
 
 John
 

DORF supports update and pattern matching by label, per
-XDisambiguateRecordFields and friends.

Anthony




Re: Records in Haskell

2012-02-24 Thread Anthony Clayden
 Actually, I looked at the SORF page again:

http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields
 There is now an 'Alternative Proposal' section that claims
 to solve polymorphic update.
 If anyone has comments on this please let us know!

Yes, Greg the quasifunctor stuff has been there for some
time.

You seem to be confused between polymorphic fields vs
type-changing update. Quasifunctor is talking about
type-changing update. (Look at the example given for `data R
a`: there's no polymorphic field.)

DORF already has a working approach to type-changing update.

I make reference to quasifunctor in the DORF proposal.
Essentially, if quasifunctor can be made to work for SORF,
it can also be made to work for DORF. (Because what
underlies both is to use a Has class with methods get/set.)

No, I don't think anybody has a satisfactory approach to
updating polymorphic/higher-ranked fields. (DORF mentions
one, but it's a ghastly hack.)

In addition to higher-ranked types, SPJ is also concerned
about h-r's with constraints. I've added a very speculative
piece in DORF's comparison to SORF that considers using
Constraint Kinds. (It's probably piling ghastliness upon
ghastliness.)

 ... and not automatically
 abstract over fields. This leaves all of our future
 options open while satisfying the narrow issue at hand.
 

What on earth do you mean by "not automatically abstract
over fields"?

AntC




Re: Records in Haskell

2012-02-25 Thread Anthony Clayden
Whoa! Suddenly a deluge over the DORF proposal.

I don't have time to reply fully now, but I must say: Barney,
you have got it all wrong.

No, DORF does not attach one class to each label. There is
only one class 'Has', with methods get and set. Each record
decl generates an instance for the combination of
record/field. You can't mix declared and free-standing
labels in the same record. The switch for DORF is at the
module level: in a module either all records and labels use
DORF, or none do (that is, they use H98 style with each
field name being unique).

AntC

- Original Message Follows -
  My objection is that I'm not sure if there is ever a
  case where you really want things to be polymorphic
 over all records.
 
 Well, I don't have a simple, really convincing example,
 but there are certainly things I want to play with. More
 importantly, DORF automatically attaches one class to each
 label, but this is often not what you want. For example,
 if you have two fields firstname and lastname the
 associated classes are less useful: what you really want
 is 
 
   class (Has r firstname String, Has r lastname String) => HasPersonalName r
 
 so that you can define
 
 fullname :: HasPersonalName r => r -> String
 fullname r = r.firstname ++ " " ++ r.lastname
 
 You may also want to define subclasses to express more
 specific conditions. In general, the compiler cannot
 automatically deduce what is semantically important: you
 need to define it yourself. The Has class is the base on
 which you can build.
 
  It doesn't seem like the
  Haskell way to have the less safe thing as the one
  that's default and convenient, and to allow the
  programmer to layer a more-safe thing on top of it if he
  or she wants to. It seems more like the Haskell way to
 have the safer thing be the default and to require extra
  work if you want to do something less safe*.
 
 I think you are using the word safe in a slightly
 misleading way. None of this is mathematically unsafe,
 because projections are natural (truly polymorphic). The
 safety that is broken here is nothing to do with the
 semantics of the language, it is to do with the semantics
 of the system being implemented, and that is something the
 compiler cannot infer. As my example above shows, it
 doesn't always correspond one to one with the labels.
  
 The Haskell way is to make things as polymorphic as is
 mathematically safe, even when this goes beyond the
 programmer's original intention. You can then restrict this
 polymorphism by giving explicit, less general types in the
 same way as in my examples. I think my approach is more
 Haskell-like.
 
 Another important Haskell design consideration is to reuse
 parts of the language where possible, rather than
 introduce new structures. Type classes were originally
 introduced to deal with equality and numeric functions,
 but were reused for many things including monads. My
 approach achieves the same as DORF (and more), but using
 existing language features instead of introducing new
 ones.
 
 Barney.
 
 



Re: Records in Haskell

2012-02-25 Thread Anthony Clayden

Wren/all

Please remember SPJ's request on the Records wiki to stick
to the namespace issue. We're trying to make something
better than H98's name clash. We are not trying to build
some ideal polymorphic record system.

To take the field labelled name: in H98 you have to
declare each record in a different module and import every
module into your application and always refer to name
prefixed by the module.

DORF doesn't stop you doing any of that. So if you think of
each name being a different meaning, carry on using
multiple modules and module prefixes. That's as easy (or
difficult) as under H98.

You can declare fieldLabel name in one module, import it
unqualified into another and declare more records with a
name label -- contrary to what somebody was claiming.

Or you can import fieldLabel name qualified, and use it as
a selector function on all record types declared using it.
It's just a function like any other imported/qualified
function, for crying out loud!

So if there's 'your' name label and 'my' name, then use
the module/qualification system as you would for any other
scoped name. Then trying to apply My.name to Your.record
will get an instance failure, as usual.

(And by the way, there are no 'DORFistas'; let's avoid
personalising this. There are people who don't seem to
understand DORF -- both those criticising and those
supporting.)

AntC

- Original Message Follows -
 On 2/25/12 10:18 AM, Gábor Lehel wrote:
  On Sat, Feb 25, 2012 at 3:54 PM, Barney Hilken <b.hil...@ntlworld.com> wrote:
  After more pondering, I finally think I understand what the
  DORFistas want. Here is an example:
  You want to define records which describe people, and
 include (among other things) a field called name. There
 might be several different record types with a name field,
 depending on whether the record refers to a customer, an
 employee, a business contact etc., but in each case name
 is the name of the person to which the record refers. You
 then write various functions which assume this, such as 
  spam :: Has r name String => r -> String
  spam r = "Dear " ++ r.name ++ "\nHave you heard..."
  Now I want to define records which describe products,
 and I also use a field name in the same way, except that
 it is the brand name of the product. I also define
 functions such as 
  offer :: Has r name String => r -> String
  offer r = "Reduced! " ++ r.name ++ " 50% off!"
 
  It doesn't make any sense to apply your functions to my
 records or vice-versa, but because we both chose the same
 label, the compiler allows it. Putting the code in
 separate modules makes no difference, since labels are
 global. 
  Exactly!
 
 FWIW, this is the concern I alluded to earlier. Namely
 that we may want  to have two (or more), er, 'classes' of
 records--- where a field is  polymorphic over an
 individual class, but we don't want those classes to 
 merge simply because they happened to choose the same name
 and type for  the field.
 
 I'm not sure it's a good proposal, but it seems like the
 only way to  handle this issue is to (1) introduce a new
 kind for  semantically-oriented field names, and (2) make
 the Has class use that  kind rather than a type-level
 string. By (1), what I mean is that rather  than referring
 to the field as name, we would declare PersonalName and 
 BrandName and then use those in lieu of the string. And if
 we do that,  then (2) demands that we must somehow make
 explicit which one we mean,  should we want the `name`
 field to be polymorphic for some given record 
 declaration.
 
 -- 
 Live well,
 ~wren
 



[ghc-devs]: Explicit inequality evidence

2016-12-17 Thread Anthony Clayden
[transferring to -users, because there's a much wider
discussion]

> > On Dec 13, 2016, at 15:04, Richard Eisenberg wrote:
> > I've thought about inequality on and off for years now,

The subject has appeared (in various guises) on Haskell
forums since well before 2002 [1]
-- which went into the 'OutsideIn(X)' model.
Search the cafe on 'type disequality', for example.

 
> >> On Tue, Dec 13, 2016 at 12:49 AM, Oleg Grenrus wrote:
> >> First the bike shedding: I’d prefer /~ and :/~:.
 
And yes, usually discussed with /~ as the type inequality
operator, since at least when ~ was introduced for type
equality.
 
> > ... but it's a hard nut to crack.
 
Which is presumably why spj has never shown any interest.
 
> > ... need evidence of inequality in Core, and
> > a brand-spanking-new type safety proof. ...
 
One rule of inference that David/Oleg haven't mentioned:
 
If x /~ y and y ~ z then x /~ z.
 
How does this go with (potentially) infinite type (family)s?

Thanks Richard for the refs on type safety proofs.
I wonder if anything in type safety relies on inequalities?
(This would be with, say, overlaps + fundeps extensions.)

FunDep 'confluence' (which Richard has re-christened
'coincident overlap' for closed type families),
surely relies on proving at some use site
that the types can never unify with such-and-such instance
(or type family equation).

For example if we have
    instance C a a where ...
    instance C a b where ...
we have to prove at a use site that the two types
cannot unify, to justify picking the second instance.
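
A small self-contained sketch of that use-site reasoning (illustrative
names only, using the per-instance overlap pragmas):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class C a b where
  c :: a -> b -> String

instance {-# OVERLAPPING #-}  C a a where c _ _ = "same type"
instance {-# OVERLAPPABLE #-} C a b where c _ _ = "different types"

useSite1, useSite2 :: String
useSite1 = c 'x' 'y'    -- Char ~ Char, so the C a a instance
useSite2 = c 'x' True   -- Char and Bool can never unify, so C a b is safe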

This goes badly with separate compilation:
suppose the `C a a` instance is not visible in every module.

 
I would love type inequality guards on instances
to be explored as an alternative approach for overlap.
(See some of my responses to Richard [2].)

In my example above:
instance C a b | a /~ b where ...
So the invisibility of the `C a a` instance would not upset
any use sites.
 
 
 
[1] Sulzmann and Stuckey, 'A Theory of Overloading', 2002
[2]
https://typesandkinds.wordpress.com/2013/04/29/coincident-overlap-in-type-families/

 
AntC
 


Re: [ghc-devs]: Explicit inequality evidence

2016-12-21 Thread Anthony Clayden
> On Dec 13, 2016, at 1:02 AM, David Feuer  wrote:
> 
>> On Tue, Dec 13, 2016 at 12:49 AM, Oleg Grenrus
 wrote:

>> 
>> I assume that in your rules, variables are not type
families, otherwise
>> 
>> x /~ y => f x /~ f y doesn't hold if `f` isn't injective.
(e.g. type family F x where F x = ())
>> other direction is true though.
> 
> I was definitely imagining them as first-class types; your
point that
> f x /~ f y => x /~ y even if f is a type family is an
excellent one.
>

Hmm, yes except: how would evidence ever arise that f x /~ f
y ?

Would we ever get a 'wanted' constraint to that effect?

More likely, we'd be trying to discriminate between
instances
in which picking some instance depends on f x /~ f y.

I don't see that any of the overlap/closed type family work
has got us away from the 'groundedness issues' 
observed in the HList paper.

We have to improve both types to a grounded type constructor
to get the evidence they're not equal.
(Or at least that parameters in the same position are not equal,
 and that the type constructors are not families and have the same arity.)


AntC



Re: DeriveFoldable treatment of tuples is surprising

2017-03-24 Thread Anthony Clayden
> On Wed Mar 22 13:54:05 UTC 2017, Twan van Laarhoven wrote:

>> On 2017-03-21 21:34, David Feuer wrote:
>> This seems much too weird:
>>
>> *> :set -XDeriveFoldable
>> *> data Foo a = Foo ((a,a),a) deriving Foldable
>> *> length ((1,1),1)
>> 1
>> *> length $ Foo ((1,1),1)
>> 3

Hmm.
*> length $ Just ((1, 1), 1)
1
*> length $ Just (1, 1)
1
*> length (1, 1)
1


> This is not unique to tuples, consider:

>  > :set -XDeriveFoldable
>  > data Foo a = Foo [[a]] deriving Foldable
>  > length [[1,2]]
>  1
>  > length $ Foo [[1,2]]
>  2

   > length $ Just [[1, 2]]
  1

Does the behaviour of other methods within Foldable
seem surprising for `data Foo a = Foo ((a, a), a) deriving Foldable`?

Did the FTP change touch DeriveFoldable?
(Silly question, yes it must have: `length` didn't use to
be in Foldable.)
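
For reference, here's my hand-written rendering of what the derived
instance amounts to (assuming the usual DeriveFoldable behaviour of
walking every `a` position):

import Data.Monoid ((<>))

data Foo a = Foo ((a, a), a)

instance Foldable Foo where
  foldMap f (Foo ((x, y), z)) = f x <> f y <> f z
  -- hence  length (Foo ((1, 1), 1)) == 3

-- whereas the Foldable instance for pairs only visits the second component:
--   foldMap f (_, y) = f y
-- hence  length ((1, 1), 1) == 1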

> On Tue Mar 21 21:29:20 UTC 2017, Edward Kmett wrote:
> In general I think the current behavior is the least
surprising as it
> "walks all the a's it can" and is the only definition
compatible with
> further extension with Traversable. Right now there are no
instances
> provided by base that violate the "walk all the a's"
intuition 

?? Are there instances in base
where the `a` appears more than once on RHS?

I can see only List and Array.
How would I have formed that "walk all the a's" intuition?
(For most of the Prelude instances,
 `length`'s result is hard-coded as 0 or 1.)

*> length (((1, 2), 3) :: Num a => ((a, a), a))
1
Doesn't seem to "walk all the a's" there.

I find pretty much all of these results surprising.
(Admittedly for different reasons in different cases.)

I think this is a good reason `length` should not be in
Foldable.
"lengthiness" just doesn't fit the abstraction.


AntC


Why isn't this Overlapping?

2017-04-16 Thread Anthony Clayden
--ghc 7.10 or 8.0.1

{-# LANGUAGE DataKinds, KindSignatures, GADTs, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances,
             UndecidableInstances, NoOverlappingInstances #-}

class TypeEq a a' (b :: Bool) | a a' -> b

instance (b ~ True) => TypeEq a a b
instance (b ~ False) => TypeEq a a' b

Those two instance heads are nearly identical, surely they
overlap?
And for a type-level type equality test, they must be
unifiable.
But GHC doesn't complain.

If I take off the FunDep, then GHC complains.

AFAICT none of those extensions imply Overlaps,
but to be sure I've put NoOverlapping.


AntC


Re: Why isn't this Overlapping?

2017-04-17 Thread Anthony Clayden
> On Tue Apr 18 00:50:20 UTC 2017, Iavor Diatchki wrote:
> 
> these two instances really should be rejected as they
> violate the FD of the class: we can derive `TypeEq a a
> True` using the first instance and `TypeEq a a False`
> using the second one.  Unfortunately, the check that we
> are using to validate FDs when `UndecidableInstances` is
> on, is not quite correct (relevant tickets are #9210 and
> #10675 where there are similar examples).
> 

Thanks Iavor, it was a propos those tickets I'm asking.
See my comments on #10675 -- we'll have to agree
to disagree on "not quite correct".
(If you want to ban instance selecting on type equality,
 you'll make a lot of people grumpy.)

My surprise here is why GHC doesn't complain about overlaps.
(Separately from whether it's doing the right thing.)

Here's another example [GHC 7.10] and here I agree
the FunDep consistency is well broken,
again needs UndecidableInstances:

{-# LANGUAGE MultiParamTypeClasses, GADTs, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

class C a b  | a -> b   where 
  foo :: a -> b -> String

instance C Int Bool where foo _ _ = "Bool called"

{- So far so good: no bare typevars in instance head;
don't need UndecidableInstances for that   -}

instance  (b ~ Char) => C Int b where
  foo _ _ = "b ~ Char called"

I can't write that instance head `C Int Char`.
GHC complains it's inconsistent with the FunDep [quite correctly].

But I can call `foo (5 :: Int) 'c'`  ==> "b ~ Char called".

If I call `foo (5 :: Int) undefined` ==> GHC complains of
overlaps.
[I would say fair enough too, except ...]

If I change the instance to

instance {-# OVERLAPPABLE #-}  (b ~ Char) => C Int b where
..

Then `foo (5 :: Int) undefined` ==> "Bool called"

So GHC both uses the FunDep to improve the type,
and uses the improvement to select an instance;
and yet seems blind to FunDep inconsistencies in the
instance decls.


AntC

> 
> > On Sun, Apr 16, 2017 at 12:13 AM, Anthony Clayden wrote:
> > 
> > --ghc 7.10 or 8.0.1
> >
> > {-# LANGUAGE DataKinds, KindSignatures, GADTs, MultiParamTypeClasses,
> >              FunctionalDependencies, FlexibleInstances,
> >              UndecidableInstances, NoOverlappingInstances #-}
> >
> > class TypeEq a a' (b :: Bool) | a a' -> b
> >
> > instance (b ~ True) => TypeEq a a b
> > instance (b ~ False) => TypeEq a a' b
> >
> > Those two instance heads are nearly identical, surely
> > they overlap?
> > And for a type-level type equality test, they must be
> > unifiable.
> > But GHC doesn't complain.
> >
> > If I take off the FunDep, then GHC complains.
> >
> > AFAICT none of those extensions imply Overlaps,
> > but to be sure I've put NoOverlapping.
> >
> >
> > AntC


RE: Why isn't this Overlapping?

2017-04-18 Thread Anthony Clayden
> On Tue Apr 18 10:31:30 UTC 2017, Simon Peyton Jones wrote:
>
> Moreover, as discussed in the user manual section,
> GHC doesn’t complain about overlapping instances at the
instance decl,
> but rather where the instances are used.

Thank you Simon, yes I knew that, so I'd written a usage
(just didn't bother putting it in the message):

foo :: (TypeEq a a' b) => a -> a' -> String
foo _ _ = "blah"

x = foo 'c' "String"

> That’s why there is no overlap complaint here

I didn't get a complaint about `x`, contrary to what I
expected.

On trying again just now:

y = foo 'c' 'd'

GHC _does_ complain of overlap.

I apologise for the distraction.


AntC

>> On 18 April 2017 01:50, Iavor Diatchki wrote
 
>> these two instances really should be rejected as they
violate the FD of the class:
>> we can derive `TypeEq a a True` using the first instance
and `TypeEq a a False`
>> using the second one.  Unfortunately, the check that we
are using
>> to validate FDs when `UndecidableInstances` is on,
>> is not quite correct (relevant tickets are #9210 and
#10675
>> where there are similar examples).


>>> On Sun, Apr 16, 2017 at 12:13 AM, Anthony Clayden wrote:

>>> --ghc 7.10 or 8.0.1
>>>
>>> {-# LANGUAGE DataKinds, KindSignatures, GADTs, MultiParamTypeClasses,
>>>              FunctionalDependencies, FlexibleInstances,
>>>              UndecidableInstances, NoOverlappingInstances #-}

>>> class TypeEq a a' (b :: Bool) | a a' -> b
>>>
>>> instance (b ~ True) => TypeEq a a b
>>> instance (b ~ False) => TypeEq a a' b

>>> Those two instance heads are nearly identical,
>>> surely they overlap?
>>> And for a type-level type equality test,
>>> they must be unifiable.
>>> But GHC doesn't complain.
>>>
>>> If I take off the FunDep, then GHC complains.
>>>
>>> AFAICT none of those extensions imply Overlaps,
>>> but to be sure I've put NoOverlapping.




Re: Trouble with injective type families

2017-07-06 Thread Anthony Clayden
> On Wed Jul 5 14:16:10 UTC 2017, Richard Eisenberg wrote:

> I'd like to add that the reason we never extended System FC
> with support for injectivity is that the proof of soundness
> of doing so has remained elusive.

Thank you Richard, Simon.

IIRC the original FDs-through-CHRs paper did think it sound,
given `C a b1` and `C a b2` under a FunDep `a -> b`,
to conclude `b1 ~ b2`.
(I think that was also the case in Mark Jones'
 original paper on FunDeps.)
(See Iavor's comments on Trac #10675, for example.)

I know GHC's current FunDep inference is lax,
because there are good reasons to want 'wiggle room'
with FunDep (in)consistency.

I'm suggesting we could tighten that consistency check;
then we might make FD inference stronger(?)
https://github.com/ghc-proposals/ghc-proposals/pull/56#issuecomment-312974557


AntC

>> On Wednesday, 05.07.2017, at 06:45 +, Simon Peyton Jones wrote:
>> Functional dependencies and type-family dependencies only
induce extra
>> "improvement" constraints, not evidence.  For example
>> 
>>  class C a b | a -> b where foo :: a -> b
>>  instance C Bool Int where ...
>> 
>>  f :: C Bool b => b -> Int
>>  f x = x -- Rejected
>> 
>> Does the fundep on 'b' allow us to deduce (b ~ Int),
GADT-like, in the
>> body of 'f', and hence accept the definition.  No, it
does not.  Think
>> of the translation into System F. We get
>> 
>>  f = /\b \(d :: C Bool b). \(x::b).  x |> ???
>> 
>> What evidence can I used to cast 'x' by to get it
>> from type 'b' to Int?
>> 
>> Rather, fundeps resolve ambiguity.  Consider
>> 
>>  g x = foo True + x
>> 
>> The call to 'foo True' gives rise to a "wanted"
constraint (C Bool
>> beta), where beta is a fresh unification variable.  Then
by the fundep
>> we get an "improvement" constraint (also "wanted") (beta
~ Int). So we
>> can infer g :: Int -> Int.
>> 
>> 
>> In your example we have
>> 
>>x :: forall a b. (T Int ~ b) => a
>>x = False
>> 
>> Think of the System F translation:
>> 
>>x = /\a b. \(d :: T Int ~ b). False |> ??
>> 
>> Again, what evidence can we use to cast False to 'a'.
>> 
>> 
>> In short, fundeps and type family dependencies only add
extra
>> unification constraints, which may help to resolve
ambiguous
>> types.  They don’t provide evidence.  That's not to say
that they
>> couldn't.  But you'd need to extend System FC, GHC's core
language, to
>> do so.
>> 
>> Simon
>> 
>> 
>>> 
>>> -Original Message-
>>> From: Glasgow-haskell-users
[mailto:glasgow-haskell-users-
>>> bounces at haskell.org] On Behalf Of Wolfgang Jeltsch
>>> Sent: 05 July 2017 01:21
>>> To: glasgow-haskell-users at haskell.org
>>> Subject: Trouble with injective type families
>>> 
>>> Hi!
>>> 
>>> Injective type families as supported by GHC 8.0.1 do not
behave like
>>> I
>>> would expect them to behave from my intuitive
understanding.
>>> 
>>> Let us consider the following example:
>>> 
 
 {-# LANGUAGE RankNTypes, TypeFamilyDependencies #-}
 
 class C a where
 
 type T a = b | b -> a
 
 instance C Bool where
 
 type T Bool = Int
 
 type X b = forall a . T a ~ b => a
 
 x :: X Int
 x = False
>>> I would expect this code to be accepted. However, I get
the
>>> following
>>> error message:
>>> 
 
 A.hs:14:5: error:
 • Could not deduce: a ~ Bool
   from the context: T a ~ Int
 bound by the type signature for:
x :: T a ~ Int => a
 at A.hs:13:1-10
   ‘a’ is a rigid type variable bound by
 the type signature for:
   x :: forall a. T a ~ Int => a
 at A.hs:11:19
 • In the expression: False
   In an equation for ‘x’: x = False
 • Relevant bindings include x :: a (bound at
A.hs:14:1)
>>> This is strange, since injectivity should exactly make
it possible
>>> to
>>> deduce a ~ Bool from T a ~ Int.
>>> 
>>> Another example is this:
>>> 
 
 {-# LANGUAGE GADTs, TypeFamilyDependencies #-}
 
 class C a where
 
 type T a = b | b -> a
 
 instance C Bool where
 
 type T Bool = Int
 
 data G b where
 
 G :: Eq a => a -> G (T a)
 
 instance Eq (G b) where
 
 G a1 == G a2 = a1 == a2
>>> I would also expect this code to be accepted. However, I
get the
>>> following error message:
>>> 
 
 B.hs:17:26: error:
 • Could not deduce: a1 ~ a
   from the context: (b ~ T a, Eq a)
 bound by a pattern with constructor:
G :: forall a. Eq a => a -> G (T
a),
  in an equation for ‘==’
 at B.hs:17:5-8
   or from: (b ~ T a1, Eq a1)
 bound by a pattern with constructor:
G :: forall a. Eq a => a -> G (T
a),
  in an equation for ‘==’
 at B.hs:17:13-16
   ‘a1’ is a rigid type variable bound by
   

Re: Superclass Equality constraints cp FunDeps

2017-04-28 Thread Anthony Clayden
> On Sat Apr 29 02:23:10 UTC 2017, Richard Eisenberg wrote: 
>
> I'm not quite sure what a restriction on (~) might be,

Thanks Richard,

I was thinking that FunDeps are restricted to bare type
vars.

I can't write either of these:

> class C a b c | a -> (b, c)   -- per my O.P. (~)

> class C a b c | a -> (b c)

So it's the one place where writing type vars `b c` like this:

> class C a b c | a -> b c

doesn't mean applying `b` to `c`.


> but (~) is effectively declared as
> 
> > class a ~ b | a -> b, b -> a
> 
> So I agree with your observations.

Aha! So I could equivalently go:

> class EqC a b | a -> b, b -> a
>
> class (EqC a (b, c)) => C a b c where ...
>   -- needs FlexibleInstances

And that does indeed work equivalently to the (~).
Again, I haven't put FunDeps on class `C`.

So should I reasonably have known that
a superclass constraint 
with FunDeps on the superclass
induces FunDeps on the sub-class
without explicitly needing to declare so?

(I'm not complaining, more surprised/impressed.)


AntC

> > On Apr 27, 2017, at 8:14 PM, Anthony Clayden
> > <anthony_clay...@clear.net.nz> wrote: 
> > The docos say [User Guide 10.14.1. on Equality
> > Constraints] 
> >> Equality constraints can also appear in class and
> > instance contexts.
> >> The former enable a simple translation of programs
> >> using functional dependencies into programs using
> > family synonyms instead. 
> >
>
http://downloads.haskell.org/~ghc/8.0.2/docs/html/users_guide/glasgow_exts.html#equality-constraints
> > 
> > And the forms of constraint seem quite sophisticated.
> > I was surprised (pleased) I could do this:
> > 
> > {-# LANGUAGE   MultiParamTypeClasses, TypeFamilies,
> > FlexibleInstances #-}
> > 
> > type family F a
> > 
> > class (F a ~ (b, c) ) => C a b c   where   -- (b c) !!
> >  f1 :: a -> b
> >  f2 :: a -> c
> > 
> > Uses of `f1` happily improve the type for `b`.
> > Uses of `f2` happily improve the type for `c`.
> > 
> > But I didn't declare a Functional Dependency.
> > (It seems to do no harm if I add `| a -> b c`.)
> > 
> > GHC's behaviour seems stronger than a "simple
> > translation". It seems entirely equivalent to a FunDep.
> > 
> > Or is there something I'm missing?
> > (I could have overlapping instances,
> > but only providing the equations for
> > type family `F` are confluent.)
> > 
> > Are there restrictions on the form of Equality
> > Constraints to get them to behave as FunDeps?
> > (It's not merely a bare typevar on one side.)
> > 
> > AntC


Superclass Equality constraints cp FunDeps

2017-04-27 Thread Anthony Clayden
The docos say [User Guide 10.14.1. on Equality Constraints]

> Equality constraints can also appear in class and instance
contexts.
> The former enable a simple translation of programs using
> functional dependencies into programs using family
synonyms instead. 
http://downloads.haskell.org/~ghc/8.0.2/docs/html/users_guide/glasgow_exts.html#equality-constraints

And the forms of constraint seem quite sophisticated.
I was surprised (pleased) I could do this:

{-# LANGUAGE   MultiParamTypeClasses, TypeFamilies,
 FlexibleInstances #-}

type family F a

class (F a ~ (b, c) ) => C a b c   where   -- (b c) !!
  f1 :: a -> b
  f2 :: a -> c

Uses of `f1` happily improve the type for `b`.
Uses of `f2` happily improve the type for `c`.
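
To spell out what I mean by a 'use', here are the same declarations again,
with an illustrative instance at Int and a use site (the instance and the
Bool/Char choice are mine, just for demonstration; the improvement is as I
observed on 8.0.2):

{-# LANGUAGE MultiParamTypeClasses, TypeFamilies, FlexibleInstances #-}

type family F a
type instance F Int = (Bool, Char)

class (F a ~ (b, c) ) => C a b c   where
  f1 :: a -> b
  f2 :: a -> c

instance C Int Bool Char where
  f1 _ = True
  f2 _ = 'x'

-- No FunDep declared on C, yet (per the behaviour described above) `b` is
-- improved to Bool, and `c` to Char, just from the superclass equality:
demo :: Bool
demo = f1 (0 :: Int)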

But I didn't declare a Functional Dependency.
(It seems to do no harm if I add `| a -> b c`.)

GHC's behaviour seems stronger than a "simple translation".
It seems entirely equivalent to a FunDep.

Or is there something I'm missing?
(I could have overlapping instances,
 but only providing the equations for
 type family `F` are confluent.)

Are there restrictions on the form of Equality Constraints
to get them to behave as FunDeps?
(It's not merely a bare typevar on one side.)

AntC


Re: Superclass Equality constraints cp FunDeps

2017-04-30 Thread Anthony Clayden
> On Sat Apr 29 05:55:14 UTC 2017, Anthony Clayden wrote:
> ...
> So should I reasonably have known that
> a superclass constraint 
> with FunDeps on the superclass
> induces FunDeps on the sub-class
> without explicitly needing to declare so?
> 
> (I'm not complaining, more surprised/impressed.)
> 

From experimenting, FunDeps seem to come through
from a superclass of a superclass of a superclass of the
class.

This from David Feuer says
http://stackoverflow.com/questions/35252562/find-an-element-in-an-hlist/35260970#35260970

"superclass context is critical 
 GHC always commits to an instance before checking the
instance constraints,
 but it doesn't even try to find an instance
 until after solving the superclass constraints.
 So we need to use the superclass constraint to fix ..."

Presumably that's the mechanism that induces FunDeps
from superclass constraints.

Is that behaviour officially documented somewhere?

(I remember when Type Families and (~) Constraints
 were first released, they weren't available in superclass
 constraints. Then there were a couple of releases where
 you could write such superclass constraints,
 but they weren't effective.)


AntC


Proposal: Instance apartness guards

2017-06-23 Thread Anthony Clayden
After years of pondering this idea (in various forms), 
and several rounds of discussion on several forums, 
I've written it up.

"This proposal tackles the thorny topic of Overlapping
instances,
 for both type classes and Type Families/Associated types,
 by annotating instance heads with type-level apartness
Guards.
 Type-level disequality predicates appear in Sulzmann &
Stuckey 2002;
 in the type-level ‘case selection’ in HList 2004;
 and in various guises in Haskell cafe discussions in
following years.
 This proposal builds on the apartness testing implemented
 as part of the Closed Type Families work."

All feedback welcome.

https://github.com/AntC2/ghc-proposals/blob/instance-apartness-guards/proposals/-instance-apartness-guards.rst

AntC


Re: Superclass Equality constraints cp FunDeps

2017-05-07 Thread Anthony Clayden
Now that I've got the bit between my teeth ...

Superclass constraints are not subject to the Paterson
conditions.
IOW I can write superclass constraints
that are not permitted as instance constraints.
(Superclass constraints are required to be
 non-cyclic, which ensures they're terminating.)

Is that worth adding to the docos?

Something like this is OK:

> class (F a b ~ b) => C a b ...
> -- equivalently
> class (D a b b) => C a b ...

Can I think of a use for that? Maybe ...

Sometimes even though you have a type function,
you can use knowledge of the result
to 'improve' the parameters.

The classic case is adding type-level Naturals.

Maybe even type-level Boolean And:
- if the result is True, so must be the params.
- if the result is False,
  and you know one param is True,
  the other must be False.
- but that can't be a function, because
  if the result is False
  and you know one param is False,
  that tells nothing about the other param.
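
Rendering those bullets as a sketch (a made-up name, and deliberately
partial):

{-# LANGUAGE DataKinds, TypeFamilies #-}

-- From the result of (&&) and one known argument, sometimes recover the other:
type family OtherArg (result :: Bool) (known :: Bool) :: Bool where
  OtherArg 'True  known  = 'True     -- result True: both arguments were True
  OtherArg 'False 'True  = 'False    -- result False, one argument True:
                                     --   the other must be False
  -- no equation for result False / known False: that combination tells us
  -- nothing about the other argument, so this can't be a total function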


AntC

> > On Sun Apr 30 19:45:34 UTC 2017, Richard Eisenberg
> wrote:
> 
> > Documentation is just about always suboptimal -- but the
> > best people to suggest concrete improvements are those
> > who were confused to begin with. So, by all means,
> > submit patches!
> 
> OK. Done. See #13657.
> 



Re: Superclass Equality constraints cp FunDeps

2017-05-06 Thread Anthony Clayden
> On Sun Apr 30 19:35:17 UTC 2017 Brandon Allbery wrote:

>> On Sun, Apr 30, 2017 at 3:31 PM, Richard Eisenberg wrote:
>>>
>>> On Apr 30, 2017, at 6:37 AM, Anthony Clayden wrote:
>>> Is that behaviour officially documented somewhere?
>>
>> Not that I can find. ...
> 
> ... the originally cited behavior of
> equality constraints could also stand to be documented.
..

Thanks Brandon, I've added a request to #10431 [q.v.].

Actually, the required flags are documented,
just in an obscure place.
Anyway it's not too bad:
GHC's error message tells you what to do.


AntC


Re: Superclass Equality constraints cp FunDeps

2017-05-06 Thread Anthony Clayden
> On Sun Apr 30 19:45:34 UTC 2017, Richard Eisenberg wrote:

> Documentation is just about always suboptimal -- but the
> best people to suggest concrete improvements are those who
> were confused to begin with. So, by all means, submit
> patches!

Thanks for the invite ;-).

OK. Done. See #13657.

I've put suggested wording, please improve!

The trickier problem is where to put the wording;
the 'feature' is really at the intersection of
several extensions.

> 
> Some relevant discussion on this point is on
> https://ghc.haskell.org/trac/ghc/ticket/10431
>

I've linked my ticket to that one.


AntC

> 
> > On Apr 30, 2017, at 3:35 PM, Brandon Allbery wrote: 
>>> On Sun, Apr 30, 2017 at 3:31 PM, Richard Eisenberg
wrote:
>>>> On Apr 30, 2017, at 6:37 AM, Anthony Clayden wrote:

>>>> Is that behaviour officially documented somewhere? 

>>> Not that I can find. Documentation on functional
>>> dependencies is somewhat lacking. This may be 
>>> because fundeps has received little love of late. 



Re: [was ghc-devs] Reasoning backwards with type families

2017-12-13 Thread Anthony Clayden
On Thu, 14 Dec 2017 at 3:19 PM, David Feuer <redir...@vodafone.co.nz> wrote:

> I still haven't really digested what you've written, but I wish to pick a
> nit (below)
>

Thanks David. Heh, heh. I think we might be agreeing about the phenomenon,
but picking different nits to 'blame'.


> On Nov 20, 2017 3:44 AM, "Anthony Clayden" <anthony_clay...@clear.net.nz>
> wrote:
>
> > On Thu Nov 16 01:31:55 UTC 2017, David Feuer wrote:
>
> ...
>
>
>
> > For (&&), the obvious things you'd want are ...
> >
> > There's nothing inherently impossible about this sort of
> reasoning.
>
> No-ish but. It relies on knowing kind `Bool` is closed.
> And GHC doesn't pay attention to that.
> So you need to bring type family `Not`
> into the reasoning; hence a SMT solver.
>
>
> I don't think this is entirely correct. The fact that Bool is closed does
> not seem relevant.  The key fact, I believe, is that (&&) is closed.
>

Hmm. I wonder what you think "closed" amounts to?

Equation 1 `'False && b = 'False` is consistent with `[b ~ Foo]`
    (unless GHC were to reason about closed kinds).

Equation 2 `'True && b = b` is consistent with `[b ~ Foo]`.

And so on.

The last equation `a && a = a` is consistent with `[a ~ Foo]`.
Furthermore it's *inconsistent* with a putative backwards FunDep `r a -> b`
on `'False && 'True ~ 'False`.
I think it would be better to omit that equation altogether.

Then when your o.p. reasons:

>>> ... we can calculate (Not a || Not b) as 'True for each of these LHSes.

What will it calculate for (Not Foo)?

> Asking GHC to reason like this about open type families smells much
> harder, but maybe my sense of smell is off.

Hmm. Your o.p. said

>>> In order for the constraint (a && b) ~ 'True to hold, the type family
application *must have reduced* using one of its equations.

I think that's smelly logic: if you want to reason backwards, then you
can't make assumptions about what "must" have reduced if GHC were
reasoning forward. That is, unless you're expecting GHC to behave like
an SMT solver over closed kinds.

Remember that the logic for selecting Closed Type Family equations
works from top to bottom *ignoring anything known about the result*.
So not only must it have reduced using one of the equations; it must
have rejected equations above the one it selected; and it must have
seen evidence for rejecting them. (It's more complicated than that in
practice: if there's coincident overlap, GHC will pick some equation
eagerly. And your `&&` equations exhibit coincident overlap, apart
from the last.)
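
As a standard illustration of that selection logic (not David's `&&`
family):

{-# LANGUAGE DataKinds, TypeFamilies #-}

type family Equal a b :: Bool where
  Equal a a = 'True
  Equal a b = 'False

-- (Equal a b), with a and b distinct type variables, does not reduce to
-- 'False: GHC must first see evidence that a and b are apart before it may
-- reject the first equation and select the second.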

If you want it to benefit from something known about the result, you
won't (in general) find the same top-to-bottom sequence helps with
type improvement. With the FunDep inconsistency in your last equation
for `&&`, I suspect that equation-selection will get 'stuck' looking
for evidence to reject earlier equations.

If we do expect GHC to behave like an SMT solver over closed kinds,
then it can reason just as well for an open type family; on the
proviso that it can see all the equations.

For a bit of history: during discussions around Injective Type
Families, one suggestion was to infer injectivity by examining the
equations given -- along the lines you're positing. That was rejected
on grounds the equations might exhibit injectivity 'by accident'. Also
that the programmer might intend injectivity, but their equations be
inconsistent. So it was decided there must be explicit declaration;
and the equations must be consistent with that declaration. No
equations for `&&` could be consistent that way.

AntC


Re: [was ghc-devs] Reasoning backwards with type families

2017-12-13 Thread Anthony Clayden
On Tue, 12 Dec 2017 at 4:53 PM, Carter Schonwald <redir...@vodafone.co.nz>
wrote:

> This was / perhaps still is one goal of injective type families too!  I’m
> not sure what the current status is, but it’s definitely related
>

Thanks Carter, yes this sort of injectivity (semi-injectivity? partial
injectivity?) is noted as future work in the 2015 paper. But I'm not seeing
a lot of hollerin' for it(?) Or am I looking in the wrong places?

The classic example is for Nats in length-indexed vectors: if we know the
length of appending two vectors, and one of the arguments, infer the length
of the other. (Oleg provided a solution using FunDeps more than a decade
ago.) But GHC has special handling for type-level Nats (or rather Ints),
hence no need to extend injectivity.

Come to that, the original work that delivered Injective Type Families
failed to find many use cases -- so the motivation was more
keep-up-with-the-Joneses, to provide equivalence to FunDeps.

There were lots of newbie mistakes when Type Families first arrived, of
thinking they were injective, because a TF application looks like a type
constructor application `F Int` cp `T Int`. But perhaps that
misunderstanding didn't represent genuine use cases?

Is anybody out there using Injective Type Families currently? What for?

AntC


> On Mon, Nov 20, 2017 at 3:44 AM Anthony Clayden <
> anthony_clay...@clear.net.nz> wrote:
>
>> > On Thu Nov 16 01:31:55 UTC 2017, David Feuer wrote:
>>
>> (Moving to ghc-users, there's nothing particularly dev-y.)
>>
>> > Sometimes it woulld be useful to be able to reason
>> backwards
>> > about type families.
>>
>> Hi David, this is a well-known issue/bit of a sore.
>> Discussed much in the debate between type families
>> compared to FunDeps.
>>
>> > For example, we have
>> >
>> > type family a && b where
>> >   'False && b  = 'False
>> >   'True  && b  = b
>> >   a  && 'False = 'False
>> >   a  && 'True  = a
>> >   a  && a  = a
>>
>> > ... if we know something about the *result*,
>> > GHC doesn't give us any way to learn anything about the
>> arguments.
>>
>> You can achieve the improvement you want today.
>>
>> You'll probably find Oleg has a solution
>> With FunDeps and superclass constraints, it'll go like this
>>
>> class (OnResult r a b, OnResult r b a) => And a b r | a b -> r
>>
>> instance And 'False b 'False
>> -- etc, as you'd expect following the tf equations
>> -- the instances are overlapping but confluent
>>
>> class OnResult r a b | r a -> b
>> instance OnResult 'True a 'True
>> instance OnResult 'False 'True 'False
>>
>> You could equally make `OnResult` a type family.
>>
>> If you can trace backwards to where `&&` is used,
>> you might be able to add suitable constraints there.
>>
>> So there's a couple of futures:
>> * typechecker plugins, using an SMT solver
>> * injective type families
>>the next phase is supposed to allow
>>
>> type family a && b = r | r a -> b, r b -> a where ...
>>
>> That will help with some type families
>> (such as addition of Nats),
>> plug1
>>
>> https://github.com/AntC2/ghc-proposals/blob/instance-apartness-guards/proposals/-instance-apartness-guards.rst#injective-type-families
>>
>> but I don't see it helping here.
>> plug2 (this example)
>>
>> https://github.com/AntC2/ghc-proposals/blob/instance-apartness-guards/proposals/-instance-apartness-guards.rst#type-family-coincident-overlap
>>
>> Because (for example) if you unify the first two equations,
>> (that is, looking for coincident overlap)
>>
>> 'False && 'False = 'False
>> 'True && 'False = 'False
>>
>> That's inconsistent on dependency ` r b -> a`.
>>
>> And you can't fix it by re-ordering the closed equations.
>>
>> > For (&&), the obvious things you'd want are ...
>> >
>> > There's nothing inherently impossible about this sort of
>> reasoning.
>>
>> No-ish but. It relies on knowing kind `Bool` is closed.
>> And GHC doesn't pay attention to that.
>> So you need to bring type family `Not`
>> into the reasoning; hence a SMT solver.
>>
>> > ...
>> > I wouldn't necessarily expect GHC
>> > to be able to work something like this out on its own.
>>
>> That's a relief ;-)
>>
>> > But it seems like there should be some way to allow the
>> user
>> > to guide it to a proof.
>>
>> Yes, an SMT solver with a model for kind `Bool`.
>> (And a lot of hard work to teach it, by someone.)
>>
>> AntC


Re: [was ghc-devs] Reasoning backwards with type families

2017-12-13 Thread Anthony Clayden
On Thu, 14 Dec 2017 at 1:55 PM, Clinton Mead <redir...@vodafone.co.nz>
wrote:

> Injective Type Families are at the core of my "Freelude" package, which
> allows many more types to be defined as Categories, Functors, Applicatives
> and Monads.
>

Cool!


> What would also be helpful is if injectivity of type C as mentioned on the
> page ...
>

OK. That's as per the type-level addition of Nats I mentioned. Did you
consider using FunDeps instead of Injective Type Families?

(I see lower down that page, type C is described as 'generalized'
injectivity.)

The variety of injectivity David F's o.p. talked about is orthogonal across
types A, B, C. We might call it 'partial injectivity' as in partial
function:
* some values of the result determine (some of) the arguments; and/or
* some values of the result with some values of some arguments determine
other arguments; but
* for some values of the result and/or some arguments, we can't determine
the other arguments.

You can kinda achieve that now using FunDeps with overlapping instances
with UndecidableInstances exploiting GHC's bogus consistency check on
FunDeps https://ghc.haskell.org/trac/ghc/ticket/10675#comment:15.

Or maybe with (Closed) Type Families if you put a bogus catch-all at the
end of the sequence of equations:

> type family F a where
>   ...
>   F a = F a

(But then it can't be injective, so you have to stitch it together with
type classes and superclass equality constraints and who-knows-what.)

None of it is sound or complete or rugged, in particular you can't risk
orphan instances -- unless plug3:
https://github.com/ghc-proposals/ghc-proposals/pull/56#issuecomment-351289722

AntC


> On Thu, Dec 14, 2017 at 11:29 AM, Anthony Clayden <
> anthony_clay...@clear.net.nz> wrote:
>
>>
>> On Tue, 12 Dec 2017 at 4:53 PM, Carter Schonwald <redir...@vodafone.co.nz>
>> wrote:
>>
>>> This was / perhaps still is one goal of injective type families too!
>>> I’m not sure why the current status is, but it’s defjnitely related
>>>
>>
>> Thanks Carter, yes this sort of injectiviy (semi-injectivity? partial
>> injectivity?) is noted as future work in the 2015 paper. But I'm not seeing
>> a lot of hollerin' for it(?) Or am I looking in the wrong places?
>>
>> The classic example is for Nats in length-indexed vectors: if we know the
>> length of appending two vectors, and one of the arguments, infer the length
>> of the other. (Oleg provided a solution using FunDeps more than a decade
>> ago.) But GHC has special handling for type-level Nats (or rather Ints),
>> hence no need to extend injectivity.
>>
>> Come to that, the original work that delivered Injective Type Families
>> failed to find many use cases -- so the motivation was more
>> keep-up-with-the-Jones's to provide equivalence to FunDeps.
>>
>> There were lots of newbie mistakes when Type Families first arrived, of
>> thinking they were injective, because a TF application looks like a type
>> constructor application `F Int` cp `T Int`. But perhaps that
>> misunderstanding didn't represent genuine use cases?
>>
>> Is anybody out there using Injective Type Families currently? What for?
>>
>> AntC
>>
>>
>>> On Mon, Nov 20, 2017 at 3:44 AM Anthony Clayden <
>>> anthony_clay...@clear.net.nz> wrote:
>>>
>>>> > On Thu Nov 16 01:31:55 UTC 2017, David Feuer wrote:
>>>>
>>>> (Moving to ghc-users, there's nothing particularly dev-y.)
>>>>
>>>> > Sometimes it woulld be useful to be able to reason
>>>> backwards
>>>> > about type families.
>>>>
>>>> Hi David, this is a well-known issue/bit of a sore.
>>>> Discussed much in the debate between type families
>>>> compared to FunDeps.
>>>>
>>>> > For example, we have
>>>> >
>>>> > type family a && b where
>>>> >   'False && b  = 'False
>>>> >   'True  && b  = b
>>>> >   a  && 'False = 'False
>>>> >   a  && 'True  = a
>>>> >   a  && a  = a
>>>>
>>>> > ... if we know something about the *result*,
>>>> > GHC doesn't give us any way to learn anything about the
>>>> arguments.
>>>>
>>>> You can achieve the improvement you want today.
>>>>
>>>> You'll probably find Oleg has a solution
>>>> With FunDeps and superclass constraints, it'll go like this
>>>>
>>>> class (OnResult r a b, OnResult

Re: [was ghc-devs] Reasoning backwards with type families

2017-12-14 Thread Anthony Clayden
On Thu, 14 Dec 2017 at 4:13 PM, Clinton Mead 
wrote:

>
> I've panicked GHC enough whilst developing Freelude so whilst I'm not sure
> exactly what you're saying I'd be hestiant about exploiting anything bogus
> (8.2 btw seems far more stable than 8.0 btw).
>

;-) Fair enough.

"bogus" is SPJ's way of saying: it works, but it isn't supported by deep
type theory. 'C'est brutal mais ca marche.'

And that particular exploit has been stable since 2004 at least: the HList
library totally relied on it until Closed Type Families arrived.

AntC


Re: [was ghc-devs] Reasoning backwards with type families

2017-11-20 Thread Anthony Clayden
> On Thu Nov 16 01:31:55 UTC 2017, David Feuer wrote:

(Moving to ghc-users, there's nothing particularly dev-y.)

> Sometimes it woulld be useful to be able to reason
backwards
> about type families.

Hi David, this is a well-known issue/bit of a sore.
Discussed much in the debate between type families
compared to FunDeps.

> For example, we have
>
> type family a && b where
>   'False && b  = 'False
>   'True  && b  = b
>   a  && 'False = 'False
>   a  && 'True  = a
>   a  && a  = a

> ... if we know something about the *result*,
> GHC doesn't give us any way to learn anything about the
arguments.

You can achieve the improvement you want today.

You'll probably find Oleg has a solution
With FunDeps and superclass constraints, it'll go like this

class (OnResult r a b, OnResult r b a) => And a b r | a b -> r

instance And 'False b 'False
-- etc, as you'd expect following the tf equations
-- the instances are overlapping but confluent

class OnResult r a b | r a -> b
instance OnResult 'True a 'True
instance OnResult 'False 'True 'False

You could equally make `OnResult` a type family.

If you can trace backwards to where `&&` is used,
you might be able to add suitable constraints there.

So there's a couple of futures:
* typechecker plugins, using an SMT solver
* injective type families
   the next phase is supposed to allow

type family a && b = r | r a -> b, r b -> a where ...

That will help with some type families
(such as addition of Nats),
plug1
https://github.com/AntC2/ghc-proposals/blob/instance-apartness-guards/proposals/-instance-apartness-guards.rst#injective-type-families

but I don't see it helping here.
plug2 (this example)
https://github.com/AntC2/ghc-proposals/blob/instance-apartness-guards/proposals/-instance-apartness-guards.rst#type-family-coincident-overlap

Because (for example) if you unify the first two equations,
(that is, looking for coincident overlap)

'False && 'False = 'False
'True && 'False = 'False

That's inconsistent on dependency ` r b -> a`.

And you can't fix it by re-ordering the closed equations.

> For (&&), the obvious things you'd want are ...
>
> There's nothing inherently impossible about this sort of
reasoning.

No-ish, but it relies on knowing kind `Bool` is closed.
And GHC doesn't pay attention to that.
So you need to bring type family `Not`
into the reasoning; hence an SMT solver.

> ...
> I wouldn't necessarily expect GHC
> to be able to work something like this out on its own.

That's a relief ;-)

> But it seems like there should be some way to allow the
user
> to guide it to a proof.

Yes, an SMT solver with a model for kind `Bool`.
(And a lot of hard work to teach it, by someone.)

AntC


Re: Proposal: Instance apartness guards

2017-11-08 Thread Anthony Clayden
Surprisingly little comment on this proposal.
Perhaps it landed when yous were busy elsewhere.
I'd like to push it to the committee soon;
this is an invitation for more feedback.

Richard E wrote a nice brief summary
https://github.com/ghc-proposals/ghc-proposals/pull/56#issuecomment-311421457

Thank you
AntC


- Original Message Follows -
> After years of pondering this idea (in various forms), 
> and several rounds of discussion on several forums, 
> I've written it up.
> 
> "This proposal tackles the thorny topic of Overlapping
> instances,
>  for both type classes and Type Families/Associated types,
>  by annotating instance heads with type-level apartness
> Guards.
>  Type-level disequality predicates appear in Sulzmann &
> Stuckey 2002;
>  in the type-level ‘case selection’ in HList 2004;
>  and in various guises in Haskell cafe discussions in
> following years.
>  This proposal builds on the apartness testing implemented
>  as part of the Closed Type Families work."
> 
> All feedback welcome.
> 
>
https://github.com/AntC2/ghc-proposals/blob/instance-apartness-guards/proposals/-instance-apartness-guards.rst
> 
> AntC


Scoped Type Variables discussion forum [was: open up the issues tracker on ghc-proposals]

2018-05-04 Thread Anthony Clayden
This thread is a discussion about discussions, not the discussion itself ;-)

I'm cc'ing to the cafe; but I'd prefer replies to come to
glasgow-haskell-users.


>> I can volunteer to at least scrape together all the objections to
ScopedTypeVariables as currently. It's not yet a proposal, so not on
github. Start a wiki page? A cafe thread? (It'll get lost.) A ghc-users
thread? (It'll get ignored.)

> ... don’t care what forums or list or whatever. As long as it’s collated
and such
> It could even be on the prime issue tracker for prime proposals. Just
that it’s written down :)

Thanks Carter, but I understand Haskell Prime to be to assess mature/stable
proposals (preferably already delivered as extensions). This discussion is
at first going to be more exploratory:
* likes and dislikes about ScopedTypeVariables as currently.
* confusions experienced by users (especially newbies)
-- although absolute newbies wouldn't be using it(?), so intermediates?
* feedback from those teaching Haskell.
* wild ideas for possible alternative designs.
* possible improvements to the current design.
* I think we're all agreed that ScopedTypeVariables should have been in
Haskell from the beginning;
but it wasn't, so now we have to worry about backwards compatibility for
programs that worked around the omission.
Or do we? What code would break? How much pain would that cause?
* anything else?

> We have lots of forums, but your point is that certain sorts of
discussions never get going with the right audience – you especially point
to “confused beginners”.
> ... It’s quite a challenge because beginners tend not to be vocal, and
yet they are a crucial set of Haskell users. Every Haskell user started as
a beginner.

On this particular topic, there's plenty of confused people asking
questions on StackOverflow. (Heads up: they're especially asking why they
need explicit `forall` whereas in regular Haskell that 'intermediates' see,
the forall is implicit.)

Can other people point me to questions/likes/dislikes on other forums?
Reddit for example.

If you've read this far, you now understand what we're trying to cover.
It's going to be random/varied thoughts at first, then perhaps coalescing
to an approach or two. At that point a formal proposal on github proper;
and the random stuff might be interesting background but will essentially
get archived/thrown away.

I do agree with David's suggestion that github Issue tracker looks like a
suitable solution. We can write formatted code and text. We can add links
and references. What do others think? Joachim has opened up Issues tracker,
as a try-out. If using it doesn't work out, that's fine and in keeping with
my "thrown away" above.

Also where else should I post links to this message to 'advertise' the
thread? I don't reddit much, so if that's suitable, please someone post
there.

Thank you
AntC


Re: Open up the issues tracker on ghc-proposals

2018-05-05 Thread Anthony Clayden
> On Th, 3 May 2018 at 13:53 UTC, Joachim Breitner wrote:
> I am worried about the signal-to-noise ratio for those poor committee
> members who have not given up on following the GitHub notifications for
> the ghc-proposals repository
>
>
Almost by definition, Issue-tracker traffic should *not* be going to
committee members, unless they deliberately opt in to some issue.

I don't get github traffic except for proposals I've opted in to; and I can
always 'mute the thread'. So specifically no traffic for the Issue David's
just raised [good]. I go and sweep github for interesting pickings every
now and then, and subscribe.

Looks like (from his puzzled reply) Simon did get traffic for that issue. I
dread to think what other github traffic Simon might have suffered recently.

Can a user configure to get proposal traffic but not Issue tracker traffic?
(There seem to be options for everything.) I suggest the committee members
do that.

AntC


Re: Scoped Type Variables discussion forum [was: open up the issues tracker on ghc-proposals]

2018-05-19 Thread Anthony Clayden
On Wed, 9 May 2018 03:01 UTC, cheater00 wrote:

> I couldn't live without ScopedTypeVariables. For me it's an essential tool
when I want to figure out

Yes absolutely. To be clear: nobody's talking about removing it. The
question is, could we get the same functionality without being so
confusing and contrary to how Haskell usually works with type
variables? (I'm not saying yea or nay, I'm trying to invite
discussion.)

> ...

> Backwards compat: Isn't this what we have Haskell 98, Haskell 2010, etc?

The current design for ScopedTypeVariables potentially changes the
types for programs that are valid H98/H2010. (Or for programs that
were valid as at when ScopedTypeVariables was introduced ~10 years
ago, perhaps using ExplicitForAll.) Probably the effect would be that
they don't compile, but it might be more subtle.


So the explanation I've seen for the current design is it was
deliberately idiosyncratic, to minimise any disruption to existing
code. Then I'm asking whether any of that code is still around? If
not/if it's been re-factored to use ScopedTypeVariables, then any
tweak to the design could have a freer hand.


But seems there's no enthusiasm for such discussion, so I'm going to
let it die. I hear there might be moves afoot elsewhere ...


AntC


Re: Scoped Type Variables discussion forum [was: open up the issues tracker on ghc-proposals]

2018-05-20 Thread Anthony Clayden
On Mon, 21 May 2018 at 11:23 AM, Carter Schonwald <redir...@vodafone.co.nz>
wrote:

> indeed .. and we can reasonably say "lets deal with the bandaid in one go
> by cleaning it up  in the next standard"
>

Thanks Carter/Brandon, the reason for asking how we should go about the
discussion was exactly: where/how are we going to maximise the chance of
finding out what code is out there, and how disruptive any 'clean up' might
be?

Ghc has occasionally made breaking releases (not saying that's what we want
to do), usually with some advance warning/deprecation period. And generally
the Haskell community has accommodated that with understanding of the
decision, to fix their code.



> so what would the next gen look like?
>
> eg: fresh variables get the usual implicit forall at the front of the
> type, and everything else either needs an explicit quantifier OR it refers
> to the outer implicit quantified variable?
>

I wanted to avoid discussing 'next gen' in possibly-obscure/write-only
mailing lists; and preferably get the discussion where posterity can see
the reasoning/examination of options.

"fresh variables get the usual implicit forall" is what happens now. (It's
just that the User Guide explains it really badly -- I'm trying to fix that
separately https://ghc.haskell.org/trac/ghc/ticket/15146 .) If the outer
variable(s) are not explicitly forall-quantified, then even same-named
variables count as fresh. IOW merely putting a `forall` can change the
meaning of a program -- that's what causes the most confusion (judging by
SO).
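
To make that concrete, here's a small example of the kind of thing that trips
people up (my own sketch, not lifted from any particular SO question) -- the
only difference between the two definitions is the word `forall`:

{-# LANGUAGE ScopedTypeVariables #-}

-- With the explicit forall, the signature's `a` scopes over the body,
-- so the local signature on `ys` refers to that same `a`: accepted.
f :: forall a. [a] -> [a]
f xs = ys ++ ys
  where
    ys :: [a]
    ys = reverse xs

-- Drop only the word `forall` and the meaning changes: the `a` in the
-- local signature would then be fresh (implicitly quantified), and GHC
-- rejects the definition with a rigid-type-variable error:
--
-- g :: [a] -> [a]
-- g xs = ys ++ ys
--   where
--     ys :: [a]         -- a fresh `a`, not g's `a`
--     ys = reverse xs   -- "Couldn't match ... rigid type variable"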

The exception is variables bound in a pattern signature for an
existentially-quantified data constructor: they *must* be fresh. This is
hard for a reader to follow because the pattern signature with data
constructor looks the same whether or not the constructor is
existentially-quantified. What's worse a constructor might have a mix of
existential and universal variables.
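
A quick sketch of that (again my example, not from any ticket):

{-# LANGUAGE ExistentialQuantification, ScopedTypeVariables #-}

data Univ a = MkUniv a                        -- ordinary, universal constructor
data Exis   = forall b. Show b => MkExis b    -- existentially quantified

fu :: forall a. Univ a -> a
fu (MkUniv (x :: a)) = x        -- `a` is the signature's `a`, already in scope

fe :: Exis -> String
fe (MkExis (x :: b)) = show x   -- `b` must be fresh: it names the existential type

The two pattern signatures look identical on the page; only the data decls
tell you which rule applies.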

AntC


> On Sat, May 19, 2018 at 2:56 PM, Brandon Allbery <allber...@gmail.com>
> wrote:
>
>> On Sat, May 19, 2018 at 7:32 AM, Anthony Clayden <
>> anthony_clay...@clear.net.nz> wrote:
>>
>>> So the explanation I've seen for the current design is it was deliberately 
>>> idiosyncratic, to minimise any disruption to existing code. Then I'm asking 
>>> whether any of that code is still around? If not/if it's been re-factored 
>>> to use ScopedTypeVariables, then any tweak to the design could have a freer 
>>> hand.
>>>
>>>
>> The reason there's no discussion about that is that nobody here has the
>> ability to go hunt down every last piece of code in every public or private
>> (think Standard Chartered, Facebook, etc.) code base and its current owner,
>> and order them to "fix" it. You can't win that battle.
>>
>> --
>> brandon s allbery kf8nh   sine nomine
>> associates
>> allber...@gmail.com
>> ballb...@sinenomine.net
>> unix, openafs, kerberos, infrastructure, xmonad
>> http://sinenomine.net
>>
>


Re: Open up the issues tracker on ghc-proposals

2018-05-01 Thread Anthony Clayden
> On May 1, 2018, at 2:24 PM, David Feuer  wrote:
> 
> Sometimes, a language extension idea could benefit from
some community discussion before it's ready for a formal
proposal.

Can I point out it's not only ghc developers who make
proposals. I'd rather you post this idea more widely.

As a datapoint, I found ghc-users and the café just fine
for those discussions. 
Ghc-users seems to have very low traffic/is rather wasted
currently.
And I believe a lot of people pre-discuss on reddit. 
For ideas that have been on the back burner for a long time,
there's often wiki pages. (For example re Quantified
Constraints.)

> I'd like to propose that we open up the GitHub issues
tracker for ghc-proposals to serve as a place to discuss
pre-proposal ideas. Once those discussions converge on one
or a few specific plans, someone can write a proper
proposal.

I'm not against that. There gets to be a lot of cruft on
some discussions about proposals, so I'd expect we could
archive it all once a proposal is more formalised.

AntC



Re: Open up the issues tracker on ghc-proposals

2018-05-02 Thread Anthony Clayden
On Wed, 2 May 2018 at 8:28 PM, Simon Peyton Jones <redir...@vodafone.co.nz>
wrote:

> |  > Sometimes, a language extension idea could benefit from
> |  some community discussion before it's ready for a formal proposal.
> |
> |  Can I point out it's not only ghc developers who make proposals.

| I'd rather you post this idea more widely.
>

(I meant for David to post more widely the idea of using Github issues
tracker. Because I suspect the people who would most benefit from the
'community discussion' are not participants on ghc-devs.)


> The Right Thing is surely for the main GHC proposals pav[g]e
> https://github.com/ghc-proposals/ghc-proposals
> to describe how you can up a "pre-proposal".  That is, document
> the entire process in one, easy to find, place.
>
> Mind you, I'm unclear about the distinction between a pre-proposal
> and a proposal. ...


Thanks Simon,

Speaking as a non-developer of ghc, often there's a bright idea with no
very clear notion how best it fits into Haskell, or could be implemented
effectively/efficiently:

* maybe it's something seen in another language;
* maybe the proposer finds themself writing the same boilerplate
repeatedly, and wonders if that's a common idiom the language could capture;
* sometimes it starts as more of a 'how do I do this?' question; then you
get told you can't; then other people chip in with 'yes I'd like to do that
too'.
* sometimes it's more of a niggle: this really annoys me/is awkward/is
confusing every time I bump into it/even though I can work round it.


 Both are drafts that invite community discussion,
> prior to submitting to the committee for decision.
>

I'm guessing as to why David raised the question. I've noticed (a minority
of) proposals generate a huge amount of discussion, a lot of which is: you
can already do that, or nearly all of that, or there's good reasons why
ghc/Haskell shouldn't do that. Then maybe the difficulty that needs
tackling is that the submitter isn't really following the process/perhaps
the process document should be clearer about what threshold of readiness
the ideas should be in before formalising(?) I'll try to avoid specifics
here, but two proposals I can think of essentially amounted to: Language
XXX has YYY; language XXX is similar to Haskell; I think YYY is great;
please put YYY in Haskell; P.S. I don't really understand ghc and all the
extensions it now offers.

As you've remarked yourself, sometimes the 'community discussion' gets so
convoluted and sidetracked it's impossible to make out where the proposal
is at, and whether all objections have been addressed. That's the point at
which IMO the proposal should be withdrawn and resubmitted as a 'fresh
start'.

OTOH, as I said, there's plenty of other forums those less
formal/pre-proposal discussions could happen. Some used to happen on
Trac/started life as bug reports -- which is rightfully discouraged.
_Could_ happen but often doesn't raise a response. What if Github issues
tracker just becomes another backwater where ideas go to get ignored?


AntC


>
> |  -Original Message-
> |  From: Glasgow-haskell-users  |  boun...@haskell.org> On Behalf Of Anthony Clayden
> |  Sent: 02 May 2018 02:34
> |  To: glasgow-haskell-users@haskell.org; ghc-d...@haskell.org
> |  Subject: Re: Open up the issues tracker on ghc-proposals
> |
> |  > On May 1, 2018, at 2:24 PM, David Feuer  |  gmail.com> wrote:
> |  >
> |  > Sometimes, a language extension idea could benefit from
> |  some community discussion before it's ready for a formal proposal.
> |
> |  Can I point out it's not only ghc developers who make proposals. I'd
> |  rather you post this idea more widely.
> |
> |  As a datapoint, I found ghc-users and the café just fine for those
> |  discussions.
> |  Ghc-users seems to have very low traffic/is rather wasted currently.
> |  And I believe a lot of people pre-discuss on reddit.
> |  For ideas that have been on the back burner for a long time, there's
> |  often wiki pages. (For example re Quantified
> |  Constraints.)
> |
> |  > I'd like to propose that we open up the GitHub issues
> |  tracker for ghc-proposals to serve as a place to discuss pre-proposal
> |  ideas. Once those discussions converge on one or a few specific plans,
> |  someone can write a proper proposal.
> |
> |  I'm not against that. There gets to be a lot of cruft on some
> |  discussions about proposals, so I'd expect we could archive it all
> |  once a proposal is more formalised.
> |
>


Re: Open up the issues tracker on ghc-proposals

2018-05-02 Thread Anthony Clayden
On Th, 3 May 2018 at 13:53 UTC, Joachim Breitner wrote:
> I am worried about the signal-to-noise ratio for those poor committee
members ...

Thanks Joachim, Yes that's exactly the worry. So please tell the rest of us
how to best use your collective time.

First help yourselves/get your own shit together:
there's now a long discussion on the committee mailing list about the
specifics of #99. There are good questions, good answers, good ideas. None
of the rest of us can contribute to that. The committee list is supposed
to be low volume/decision making only. WTF?

(That seems to be triggered by one particular committee member who
seldom/never looks at github, and prefers email discussion. Yous others
could perhaps coach him?)

> hmm, some of that sounds like it would be better suited for haskell-cafe,
StackOverflow, ...

My point about "sometimes it's more of a niggle" was aimed at exactly your
(Joachim's) series of proposals 'Resurrect Pattern Signatures'. The
motivation is it helps "confused beginners". But those beginners won't be
providing feedback on github. Instead you've got feedback from experienced
users who've all said they see no point in the proposal. So the discussion
has gone round and round and spun off other proposals. That whole series of
discussions would be better happening somewhere else: where?

David's quite correct
>> Haskell-cafe might work, but it's a bit tricky to pull up all the
language extension ideas discussed there.

My impression is not many people who could help refine a pre-proposal ever
take part in the cafe.

Stackoverflow likewise. (I did raise a 'how do I do this?' type question
there. It was David who responded, thank you. But I ended up answering it
myself; and it turned out there was already a proposal on the slate.)

>> My limited experience with glasgow-haskell-users is that it's where
threads go to die.

(I did try to continue one of David's threads there a few months ago.) But
yes, my experience too. And that's sad because it's a wasted resource. I'm
grateful to Simon for noticing this thread; but most topics I've raised on
ghc-users have gone nowhere. So then I've tried pursuing them by poaching
on Trac or github -- which is an abuse, I know.

> Most vague ideas get better when the proposer is nudged to sit down and write
it up properly! (And some get dropped in the process, which is also good
:-)).

Yes exactly what I'm trying to get to happen. How/where?

Here's a specific example: there's talk of baking ScopedTypeVariables into
the H2020 standard. There's also people unhappy with ScopedTypeVariables as
currently (I'm one, but I don't know if my reservations are the same as
others'). If we don't have an alternative proposal (and preferably an
experimental extension) by 2020, the committee can only go with the as
currently fait accompli or continue the H2010 status quo.

I can volunteer to at least scrape together all the objections to
ScopedTypeVariables as currently. It's not yet a proposal, so not on
github. Start a wiki page? A cafe thread? (It'll get lost.) A ghc-users
thread? (It'll get ignored.)


AntC


Quantified Constraints and Injectivity

2018-02-11 Thread Anthony Clayden
I’m looking at the flurry of activity around Quantified Constraints [1, 2,
3], suggestion of uses for implication logic [4], and comments in the
hs2017 paper “raise the expressive power of type classes … to first-order
logic. … more expressive modelling …”[Abstract].


Consider modelling the logic for somewhat-injective type functions. Take
type-level Boolean `And` [5].


`And` is not fully injective but:

* If the result is True, the two arguments must be True.

* If the result is False and one argument is True, the other must be False.


class ( b3 ~ True => (b1 ~ True, b2 ~ True),
        (b3 ~ False, b1 ~ True) => b2 ~ False )   -- etc
  => And (b1 :: Bool) (b2 :: Bool) (b3 :: Bool)  | b1 b2 -> b3
  where …


None of this is relying on `Bool` being a closed type AFAICT. (That’s the
usual downfall of trying to capture application logic in constraints.)



Come to that, can we capture the logic of the FunDep? (The hs2017 paper
hints we might.)


class ( {- as before -},
        ( forall b3'. And b1 b2 b3' => b3' ~ b3 ) )
  => And (b1 :: Bool) (b2 :: Bool) (b3 :: Bool)  | b1 b2 -> b3
  where …


This might come closer to the logic of FunDeps as originally specified by
Mark P Jones, which ghc doesn’t quite reach to [6].


OTOH we now have a cyclical superclass constraint, which ain’t allowed.




[1] https://github.com/ghc-proposals/ghc-proposals/pull/109

[2] https://ghc.haskell.org/trac/ghc/wiki/QuantifiedConstraints

[3] https://ghc.haskell.org/trac/ghc/ticket/2893

[4] https://mail.haskell.org/pipermail/libraries/2017-December/028377.html

[5] https://mail.haskell.org/pipermail/ghc-devs/2017-November/015073.html,
continued at


https://mail.haskell.org/pipermail/glasgow-haskell-users/2017-November/026650.html

[6] https://ghc.haskell.org/trac/ghc/ticket/10675


AntC


Re: Equality constraints (~): type-theory behind them

2018-12-22 Thread Anthony Clayden
On Mon, 10 Dec 2018 at 6:12 PM, Anthony Clayden <
anthony_clay...@clear.net.nz> wrote:

> On Mon, 10 Dec 2018 at 9:36 PM, Tom Schrijvers wrote:
>
>> Maybe our Haskell'17 paper about Elaboration on Functional Dependencies
>> sheds some more light on your problem:
>>
>> https://people.cs.kuleuven.be/~tom.schrijvers/Research/papers/haskell2017a.pdf
>>
>>
> Thank you Tom, looks interesting and very applicable. It'll be a few days
> before I can take a proper look.
>

So the paper's main motivation is wrt Trac #9670. I've added some comments
there, including a link to your paper. In particular, both GHC and Hugs
will infer the 'right' type -- i.e. as improved from the FunDep. But
neither allows you to use that improvement to write the desired function
`f`. It's a phasing of inference thing(?)

I feel I'm not getting much light shed. No amount of adding `(~)` in GHC
nor `TypeCast`+`typeCast` method in Hugs will get #9670 function `f` to
compile. So my 'specific example' earlier in this thread shows `(~)` is
moderately eager/more eager than FunDeps alone, but not eager enough for
#9670.

IOW from Adam's comment about "inferring where `typeCast` needs to be
placed", it seems there's no such place. Perhaps because class `C` has no
methods(?)

Yes, as per your+Georgios' paper, replacing the FunDep with an Assoc Type
and superclass `(~)` constraint is eager enough. That's not telling me why
the FunDep+`(~)` constraint on the instances or function signatures isn't
so eager.

From the #9670 comment by spj: it's historically to do with an inability to
represent evidence under System FC. It seems GHC is taking absence of
evidence as evidence of absence, even though it's able to infer the 'right'
type.

AntC

On Fri, 7 Dec 2018 at 10:57 PM, Adam Gundry wrote:
>>>
>>>> ...
>>>
>>> This is rather like automatically inferring where `typeCast` needs to be
>>>> placed (indeed, at the Core level there is a construct for casting by an
>>>> equality proof, which plays much the same role).
>>>>
>>>>


Re: Equality constraints (~): type-theory behind them

2018-12-22 Thread Anthony Clayden
On Sat, 22 Dec 2018 at 8:01 PM, Oliver Charles wrote:

On Sat, Dec 22, 2018 at 11:08 AM Anthony Clayden
>  wrote:
>
> > So the paper's main motivation is wrt Trac #9670.
>
> Are you sure you mean #9670?
>
Oops. Thanks for the catch Oliver. That should be #9627. (One of the
tickets to do with FunDeps and type improvement ends in 70 ;-)

AntC


Re: Equality constraints (~): type-theory behind them

2018-12-22 Thread Anthony Clayden
On Sat, 22 Dec 2018 at 7:08 PM, Anthony Clayden <
anthony_clay...@clear.net.nz> wrote:

>
> On Mon, 10 Dec 2018 at 6:12 PM, Anthony Clayden <
> anthony_clay...@clear.net.nz> wrote:
>
>> On Mon, 10 Dec 2018 at 9:36 PM, Tom Schrijvers wrote:
>>
>>> Maybe our Haskell'17 paper about Elaboration on Functional Dependencies
>>> sheds some more light on your problem:
>>>
>>> https://people.cs.kuleuven.be/~tom.schrijvers/Research/papers/haskell2017a.pdf
>>>
>>>
>>
> I feel I'm not getting much light shed. No amount of adding `(~)` in GHC
> nor `TypeCast`+`typeCast` method in Hugs will get #9627 function `f` to
> compile. So my 'specific example' earlier in this thread shows `(~)` is
> moderately eager/more eager than FunDeps alone, but not eager enough for
> #9627.
>


Can anybody explain @yav's comment here
https://github.com/ghc-proposals/ghc-proposals/pull/158#issuecomment-449590613
"currently equality constraints have no run-time evidence associated with
them".

Haskell has type-erasure semantics. Why would I need any _run-time_
evidence for anything to do with type improvement? What impact would the
lack of evidence have at run-time?

Does this mean `(~)` constraints don't produce evidence at compile time?
Which is also the difficulty for type improvement under FunDeps.

(I tried a type family to achieve the same effect as `(~)` with an
injective TF. No improvement in improvement.)


AntC

>

>


Re: Equality constraints (~): type-theory behind them

2018-12-07 Thread Anthony Clayden
On Fri, 7 Dec 2018 at 10:57 PM, Adam Gundry wrote:

>
> Regarding inference, the place that comes to mind is Vytiniotis et al.
> OutsideIn(X):
>

Thanks Adam for the references. Hmm That OutsideIn paper is formidable in
so many senses ...

I'm not sure I grok your response on my specific q; to adapt your example
slightly, without a signature

foo2 x@(_: _) = x

GHC infers `foo2 :: [a] -> [a]`. That's not general enough. So I give a
signature with bare `b` as the return type, using the (~) to improve it:

foo2 :: [a] ~ b => [a] -> b

... and GHC infers `foo2 :: [a] -> [a]`. Oh. Is that what I should expect?

I'm chiefly concerned about improving bare tyvars under a FunDep. So to
take the time-worn example, I want instances morally equivalent to

data TTrue = TTrue;  data TFalse = TFalse

class TEq a b r  | a b -> r  where tEq :: a -> b -> r

instance TEq a a TTrue   where tEq _ _ = TTrue
instance TEq a b TFalse  where tEq _ _ = TFalse

The FunDeps via CHRs 2006 paper tells me that first instance is supposed to
be as if

instance (r ~ TTrue) => TEq a a r  where ...

but it isn't

tEq 'a' 'a' :: TFalse  -- returns TFalse, using the first pair of instances
tEq 'a' 'a' :: TFalse  -- rejected, using the (~) instance:
                       --   'Couldn't match type TFalse with TTrue ...'

The 'happily' returning the 'wrong' answer in the first case has caused
consternation amongst beginners, and a few Trac tickets.

So what magic is happening with (~) that it can eagerly improve `foo2` but
semi-lazily hold back for `tEq`?

Of course I lied about the first-given two instances for TEq: instances are
not consistent with Functional Dependency. So I need to resort to overlaps
and a (~) and exploit GHC's bogus consistency check:

instance {-# OVERLAPPABLE #-} (r ~ TFalse) => TEq a b r  where ...

> ... like automatically inferring where `typeCast` needs to be placed

Yes the non-automatic `TypeCast` version

instance (TypeCast TTrue r) => TEq a a r  where
  tEq _ _ = TTrue          -- rejected 'Couldn't match expected type 'r'
                           --   with actual type 'TTrue' ...'

  tEq _ _ = typeCast TTrue -- accepted

Even though `typeCast` is really `id` (after it's jumped through the
devious indirection hoops I talked about).

So what I'm trying to understand is not just "where" to place `typeCast`
but also "when" in inference it unmasks the unification.


AntC


> About your specific question, if a wanted constraint `a ~ b` can be
> solved by unifying `a` with `b`, then GHC is free to do so (in a sense,
> either parameter of (~) can be improved from the other). The real
> difference with `TypeCast` arises when local equality assumptions (given
> constraints) are involved, because then there may be wanted constraints
> that cannot be solved by unification but can be solved by exploiting the
> local assumptions.
>
> For example, consider this function:
>
> foo :: forall a b . a ~ b => [a] -> [b]
> foo x = x
>
> When checking the RHS of `foo`, a wanted constraint `[a] ~ [b]` arises
> (because `x` has type `[a]` and is used in a place where `[b]` is
> expected). Since `a` and `b` are universally quantified variables, this
> cannot be solved by unification. However, the local assumption (given
> constraint) `a ~ b` can be used to solve this wanted.
>
> This is rather like automatically inferring where `typeCast` needs to be
> placed (indeed, at the Core level there is a construct for casting by an
> equality proof, which plays much the same role).
>
> Hope this helps,
>
> Adam
>
>
> On 06/12/2018 12:01, Anthony Clayden wrote:
> > The (~) constraint syntax is enabled by either `GADTs` or `TypeFamilies`
> > language extension.
> >
> > GADTs/Assoc Data Types/Assoc Type Synonyms/Type Families arrived in a
> > series of papers 2005 to 2008, and IIRC the development wasn't finished
> > in full in GHC until after that. (Superclass constraints took up to
> > about 2010.)
> >
> > Suppose I wanted just the (~) parts of those implementations. Which
> > would be the best place to look? (The Users Guide doesn't give a
> > reference.) ICFP 2008 'Type Checking with Open Type Functions' shows
> > uses of (~) in user-written code, but doesn't explain it or motivate it
> > as a separate feature.
> >
> > My specific question is: long before (~), it was possible to write a
> > TypeCast class, with bidirectional FunDeps to improve each type
> > parameter from the other. But for the compiler to see the type
> > improvement at term level, typically you also need a typeCast method and
> > to explicitly wrap a term in it. If you just wrote a term of type `a` in
> > a place where `b` was wanted, the compiler would either fail to see your
> > `T

Equality constraints (~): type-theory behind them

2018-12-06 Thread Anthony Clayden
The (~) constraint syntax is enabled by either `GADTs` or `TypeFamilies`
language extension.

GADTs/Assoc Data Types/Assoc Type Synonyms/Type Families arrived in a
series of papers 2005 to 2008, and IIRC the development wasn't finished in
full in GHC until after that. (Superclass constraints took up to about
2010.)

Suppose I wanted just the (~) parts of those implementations. Which would
be the best place to look? (The Users Guide doesn't give a reference.) ICFP
2008 'Type Checking with Open Type Functions' shows uses of (~) in
user-written code, but doesn't explain it or motivate it as a separate
feature.

My specific question is: long before (~), it was possible to write a
TypeCast class, with bidirectional FunDeps to improve each type parameter
from the other. But for the compiler to see the type improvement at term
level, typically you also need a typeCast method and to explicitly wrap a
term in it. If you just wrote a term of type `a` in a place where `b` was
wanted, the compiler would either fail to see your `TypeCast a b`, or unify
the two too eagerly, then infer an overall type that wasn't general enough.

(For that reason, the instance for TypeCast goes through some
devious/indirect other classes/instances or exploits separate compilation,
to hide what's going on from the compiler's enthusiasm.)
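
For anyone who hasn't seen it, the HList 2004 definition goes roughly like
this -- I'm quoting from memory, so treat it as a sketch; it needs
MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances,
FlexibleContexts, UndecidableInstances:

class TypeCast a b | a -> b, b -> a where
  typeCast :: a -> b

class TypeCast' t a b | t a -> b, t b -> a where
  typeCast' :: t -> a -> b

class TypeCast'' t a b | t a -> b, t b -> a where
  typeCast'' :: t -> a -> b

instance TypeCast' () a b => TypeCast a b where
  typeCast = typeCast' ()

instance TypeCast'' t a b => TypeCast' t a b where
  typeCast' = typeCast''

instance TypeCast'' () a a where
  typeCast'' _ x = x

The two layers of indirection -- or putting the last instance in a separate
module -- are exactly those hoops: they stop the compiler selecting the
`a`-equals-`a` instance too early and unifying before the rest of inference
has settled.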

But (~) does some sort of magic: it's a kinda typeclass but without a
method. How does it mesh with type improvement in the goldilocks zone:
neither too eager nor too lazy?


AntC


Re: Equality constraints (~): type-theory behind them

2018-12-10 Thread Anthony Clayden
On Mon, 10 Dec 2018 at 9:36 PM, Tom Schrijvers wrote:

> Maybe our Haskell'17 paper about Elaboration on Functional Dependencies
> sheds some more light on your problem:
>
> https://people.cs.kuleuven.be/~tom.schrijvers/Research/papers/haskell2017a.pdf
>
>
Thank you Tom, looks interesting and very applicable. It'll be a few days
before I can take a proper look.

I see you compare GHC vs Hugs: good, they differ significantly in the
details of implementation. I found Hugs well-principled, whereas GHC is far
too sloppy in what instances it accepts (but then you find they're
unusable).

Maybe I scanned your paper too quickly, but it looks like you insist on
non-overlapping instances (or that if instances overlap, they are
'confluent', in the terminology of the 2005~2008 work; or 'coincident' in
Richard E's work). And the 2006 CHRs paper takes the same decision. That's
a big disappointment: I think the combo of Overlap+FunDeps makes sense (you
need to use (~) constraints -- hence this thread), and there's plenty of
code using the combo -- including the HList 2004 paper (with `TypeCast`).

You can make that combo work with GHC, as did HList. But it's hazardous
because of GHC's sloppiness/bogusness. You can't make that combo work with
Hugs as released; but because it's a better principled platform, a
limited-scope tweak to the compiler makes it work beautifully. Documented
here: https://ghc.haskell.org/trac/ghc/ticket/15632#comment:2


AntC



> On Sat, Dec 8, 2018 at 6:42 AM Anthony Clayden <
> anthony_clay...@clear.net.nz> wrote:
>
>> On Fri, 7 Dec 2018 at 10:57 PM, Adam Gundry wrote:
>>
>>>
>>> Regarding inference, the place that comes to mind is Vytiniotis et al.
>>> OutsideIn(X):
>>>
>>
>> Thanks Adam for the references. Hmm That OutsideIn paper is formidable in
>> so many senses ...
>>
>> I'm not sure I grok your response on my specific q; to adapt your example
>> slightly, without a signature
>>
>> foo2 x@(_: _) = x
>>
>> GHC infers `foo2 :: [a] -> [a]`. That's not general enough. So I give a
>> signature with bare `b` as the return type, using the (~) to improve it:
>>
>> foo2 :: [a] ~ b => [a] -> b
>>
>> ... and GHC infers `foo2 :: [a] -> [a]`. Oh. Is that what I should expect?
>>
>> I'm chiefly concerned about improving bare tyvars under a FunDep. So to
>> take the time-worn example, I want instances morally equivalent to
>>
>> data TTrue = TTrue;  data TFalse = TFalse
>>
>> class TEq a b r  | a b -> r  where tEq :: a -> b -> r
>>
>> instance TEq a a TTrue   where tEq _ _ = TTrue
>> instance TEq a b TFalse  where tEq _ _ = TFalse
>>
>> The FunDeps via CHRs 2006 paper tells me that first instance is supposed
>> to be as if
>>
>> instance (r ~ TTrue) => TEq a a r  where ...
>>
>> but it isn't
>>
>> tEq 'a' 'a' :: TFalse  -- returns TFalse, using the first pair of
>> instances
>> tEq 'a' 'a' :: TFalse  -- rejected, using the (~) instance:
>> 'Couldn't match type TFalse with TTrue ...'
>>
>> The 'happily' returning the 'wrong' answer in the first case has caused
>> consternation amongst beginners, and a few Trac tickets.
>>
>> So what magic is happening with (~) that it can eagerly improve `foo2`
>> but semi-lazily hold back for `tEq`?
>>
>> Of course I lied about the first-given two instances for TEq: instances
>> are not consistent with Functional Dependency. So I need to resort to
>> overlaps and a (~) and exploit GHC's bogus consistency check:
>>
>> instance {-# OVERLAPPABLE #-} (r ~ TFalse) => TEq a b r  where ...
>>
>> > ... like automatically inferring where `typeCast` needs to be placed
>>
>> Yes the non-automatic `TypeCast` version
>>
>> instance (TypeCast TTrue r) => TEq a a r  where
>>   tEq _ _ = TTrue -- rejected 'Couldn't match expected type
>> 'r' with actual type 'TTrue' ...'
>>
>>   tEq _ _ = typeCast TTrue-- accepted
>>
>> Even though `typeCast` is really `id` (after it's jumped through the
>> devious indirection hoops I talked about).
>>
>> So what I'm trying to understand is not just "where" to place `typeCast`
>> but also "when" in inference it unmasks the unification.
>>
>>
>> AntC
>>
>>
>>> About your specific question, if a wanted constraint `a ~ b` can be
>>> solved by unifying `a` with `b`, then GHC is free to do so (in a sense,
>>> either parameter of (~) can be improved from the other). The real
>>> difference with `Ty

Re: Equality constraints (~): type-theory behind them

2019-03-25 Thread Anthony Clayden
lers do this; almost all use an untyped
> intermediate language.  With an untyped IR, all you need do is typecheck
> the source program, and then you are done with typechecking --- provided
> that all the subsequent transformations and optimisations are sound.
>
>
>
> But with a statically typed IR, we have to ensure that every
> transformation produces a program that is itself well-typed; and that in
> turn pretty much rules out complicated inference algorithms, because they
> are fragile to transformation.  Instead, GHC uses a complicated inference
> algorithm on the (implicitly typed) *source* program, but under the
> straitjacket restriction that the type inference algorithm must *also*
> translate the program into an *explicitly*-typed IR.
>
>
>
> This is a choice that GHC makes.  It’s a choice that I love, and one that
> makes GHC distinctive.  But I accept that it comes with a cost.
>
>
>
> There may be an IR based on fundeps, or CHRs or something, that has
> similar properties to FC.   It would be a great research goal to work on
> such a thing.
>
>
>
> That said,  there may be aspects of GHC’s implementation fundeps that are
> unsatisfactory or argualbly just plain wrong, and which could be fixed and
> *still* translate into FC.   If anyone would like to work on that, I’d be
> happy to help explain how the current machinery works.
>
>
>
> Simon
>
>
>
> *From:* Glasgow-haskell-users  *On
> Behalf Of *Anthony Clayden
> *Sent:* 25 March 2019 11:50
> *To:* GHC Users List ; Tom Schrijvers <
> tom.schrijv...@cs.kuleuven.be>
> *Subject:* Re: Equality constraints (~): type-theory behind them
>
>
>
>
>
> > On Mon, Dec 10, 2018 at 8:36:42 AM Tom Schrijvers wrote:
>
> > Maybe our Haskell'17 paper about Elaboration on Functional Dependencies
> sheds some more light on your problem:
>
>
>
> Thanks Tom, I've also been working through the 2011 expanded version of 
> 'System
> F with Type Equality Coercions' that Adam suggested.
>
>
>
> I'm finding your 2017 paper's proposals not up to the job, because it
> doesn't consider FunDeps for Overlapping Instances; and unnecessary because
> the examples it's addressing can be fixed purely using FunDeps, with their
> semantics per the 2006 'via CHRs' paper. The chief problems with using
> FunDeps in GHC code is GHC doesn't follow that semantics; neither does it
> follow any other well-principled/documented semantics.You can get it to
> accept a bunch of instances then find that taken together they're not
> consistent and confluent, in the sense of the 'via CHRs' paper.
>
>
>
> Specifically, re the 2017's paper Challenge 1: Enforcing (essentially Trac
> #10675). That set of instances in the example is not valid by the 'via
> CHRs' rules, and should be rejected. Hugs indeed rejects that code. GHC
> accepts it and #10675 shows what sort of incoherence can result. I'm not
> seeing we need to translate to Type Families/System FC to explain it.
>
>
>
> Re Challenge 2: Elaborating (which is Trac #9627 that gives the main
> example for the paper). We can fix that code with an extra `~` constraint:
>
>
>
> class C a b | a -> b
>
> instance C Int Bool
>
>
>
> f :: (C Int b, b ~ Bool) => b -> Bool
>
> f x  = x
>
> (Or in Hugs with a `TypeCast` instead of the `~`.)
>
>
>
> Is that extra constraint onerous? The signature already has `Bool` hard-coded 
> as the return type, so I don't see it's any more onerous.
>
> Or put that signature as
>
> f :: (C Int b) => b -> b
>
> Which also has the effect `b` must be `Bool`.
>
>
>
> I'd far rather see GHC's implementation of FunDeps made more coherent (and 
> learning from Hugs) than squeezing it into the straitjacket of System FC and 
> thereby lose expressivity.
>
>
>
> AntC
>
>


Re: props for Hugs over System FC

2019-03-24 Thread Anthony Clayden
On Mon, Mar 25, 2019 at 6:24 PM Anthony Clayden <
anthony_clay...@clear.net.nz> wrote:

>...
>

Errk. pasted the wrong example. The code that works is

class F a b | a -> b

instance F Int Bool


class D a where { op :: (F a b) => a -> b }

instance (TypeCast Bool b') => D Int where { op _ = typeCast True }


That dangling `b'` in the constraint is weird. Anyhow: compiles in
Hugs, doesn't in GHC. Unless someone here can persuade it?


AntC


props for Hugs over System FC

2019-03-24 Thread Anthony Clayden
heh heh. I've got to record this for posterity. I've never known an example
before.

Some code that compiles in Hugs and works fine; but GHC (8.6.4) can't
typecheck so rejects.

It's an example in the 2011 'System F with Type Equality Coercions',
section 2.3 discussing FunDeps; and used to justify the extra power of type
inference in Type Families as opposed to FunDeps. Full details discussed
here: https://gitlab.haskell.org/ghc/ghc/issues/16430#note_189393

class F a b | a -> b
instance F Int Bool
class D a where { op :: F a b => a -> b }
instance D Int where { op _ = True }

True that doesn't compile as given. Hugs says: 'Inferred type is not
general enough'.

GHC says 'Couldn't match expected type `b' with actual type `Bool''/
'`b' is a rigid type variable'. (So essentially the same failure of
typechecking.)

With a little help for the type inference, this compiles in Hugs.

class C a b | a -> b
instance C Int Bool
f :: (C Int b, TypeCast b Bool) => b -> Bool
f x  = typeCast x

With `TypeCast` defined as for HList.

But GHC still rejects it; and rejects a version with a `(~)`
constraint instead of the `TypeCast`.


AntC


Re: Equality constraints (~): type-theory behind them

2019-03-25 Thread Anthony Clayden
> On Mon, Dec 10, 2018 at 8:36:42 AM Tom Schrijvers wrote:
> Maybe our Haskell'17 paper about Elaboration on Functional Dependencies sheds
some more light on your problem:

Thanks Tom, I've also been working through the 2011 expanded version of 'System
F with Type Equality Coercions' that Adam suggested.

I'm finding your 2017 paper's proposals not up to the job, because it
doesn't consider FunDeps for Overlapping Instances; and unnecessary because
the examples it's addressing can be fixed purely using FunDeps, with their
semantics per the 2006 'via CHRs' paper. The chief problems with using
FunDeps in GHC code is GHC doesn't follow that semantics; neither does it
follow any other well-principled/documented semantics.You can get it to
accept a bunch of instances then find that taken together they're not
consistent and confluent, in the sense of the 'via CHRs' paper.

Specifically, re the 2017's paper Challenge 1: Enforcing (essentially Trac
#10675). That set of instances in the example is not valid by the 'via
CHRs' rules, and should be rejected. Hugs indeed rejects that code. GHC
accepts it and #10675 shows what sort of incoherence can result. I'm not
seeing we need to translate to Type Families/System FC to explain it.

Re Challenge 2: Elaborating (which is Trac #9627 that gives the main
example for the paper). We can fix that code with an extra `~` constraint:

class C a b | a -> b

instance C Int Bool


f :: (C Int b, b ~ Bool) => b -> Bool

f x  = x

(Or in Hugs with a `TypeCast` instead of the `~`.)


Is that extra constraint onerous? The signature already has `Bool`
hard-coded as the return type, so I don't see it's any more onerous.

Or put that signature as

f :: (C Int b) => b -> b

Which also has the effect `b` must be `Bool`.


I'd far rather see GHC's implementation of FunDeps made more coherent
(and learning from Hugs) than squeezing it into the straitjacket of
System FC and thereby lose expressivity.


AntC


Re: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration

2019-03-12 Thread Anthony Clayden
>On March 6, 2019 11:21:34 UTC, Ben Gamari wrote:

>> On March 6, 2019 6:11:49 AM EST, Ara Adkins wrote:
>>> One question: what is happening with the trac tickets mailing list? I
>>> imagine it'll be going away, but for those of us that use it to keep
>>> track of things is there a recommended alternative?

Yes I look at that as a way to keep a finger lightly on the pulse -
chiefly for what users think ghc should be doing.

>The ghc-commits list will continue to work.

But that's only fixes, not explanations.

>The ghc-tickets list is a good question. I suspect that under gitlab there 
>will be less need for this list but we may still want to continue maintaining 
>it regardless for continuity's sake. Thoughts?


The gitlab issues list can be sorted by latest update, which should be
roughly equiv. But it's showing weird stuff at the moment. (Like
decades-old closed tickets with activity 1 day ago.) I guess that's a
'feature' of the migration(?)


AntC


Constraints on type synonyms

2021-06-21 Thread Anthony Clayden
Consider

>pattern PPoint :: Floating a => a -> a -> (a, a)   --  => ()
>pattern PPoint x y = (x, y)
>originP = PPoint 0 0
>scaleP w (PPoint x y) = PPoint (w*x) (w*y)

Inferred `scaleP :: Floating a => a -> (a, a) -> (a, a)` -- so more
restrictive than the `Num a` that would be inferred usually for `scaleP`'s
`(*)` operator.

Ok what's the type-level analogue for `PPoint`, so that I can give
signatures, instances, etc for methods using it? The type is not this:

>type CPoint a = Floating a => (a, a)

(Needs `RankNTypes` to put a constraint in RHS of a type synonym.) This
seems to get accepted:

> scaleP :: a -> CPoint a -> CPoint a

Inferred `scaleP :: Floating a => a -> CPoint a -> (a, a)` -- that
un-expanded `CPoint a` hints at a lurking danger. I couldn't for example
write the equation for `scaleP` with a type annotation

>scaleP w (PPoint x y) = PPoint (w*x) (w*y) :: CPoint a

Despite the annotation giving the result type exactly per the signature,
this gives 'rigid type variable' rejections. This works (needs
`ScopedTypeVariables`)

>scaleP (w :: a) (PPoint x y) = PPoint (w*x) (w*y) :: CPoint a

What I especially want is instances and constraints:

>instance C1 (CPoint a)            -- * Illegal qualified type: Floating a => (a, a)
>                                  -- * In the expansion of type synonym `CPoint'
>instance (C1 (CPoint a)) => C2 a  -- * Illegal qualified type: Floating a => (a, a)
>                                  --   GHC doesn't yet support impredicative polymorphism

(Needs `FlexibleInstances` to appear in the instance head,
`FlexibleContexts` to appear in the constraint.) But I don't want to embed
a qualified type in the instance head nor constraints, I want the expansion
to be

>instance (Floating a) => C1 (a, a)
>instance (C1 (a, a), Floating a) => C2 a

It's kinda the constraint is 'outside' the constructed type; as if the
synonym were given by

>type (Floating a) => CPoint a = (a, a)

With the idea that when the compiler is expanding `CPoint a`, it floats the
associated constraint out to the context the synonym call appears in. That
is, I'm treating the expansion as very early in the compiler pipeline, just
after syntax checking, before type inference/checking.
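
So for example (this is the expansion I'd want; it's not something GHC does
today):

>-- proposed decl:   type (Floating a) => CPoint a = (a, a)
>--
>-- then a signature written against the synonym
>--     scaleP :: a -> CPoint a -> CPoint a
>-- would expand each `CPoint a` to `(a, a)`, floating a `Floating a`
>-- constraint out to the signature's context; after de-duplication:
>--     scaleP :: Floating a => a -> (a, a) -> (a, a)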

If that constraint-outside syntax in the type synonym decl seems familiar,
you might have seen it discussed briefly in [Jones, Morris, Eisenberg 2020
Partial Type Constructors], Section 7 Related Work 'Datatype contexts in
Haskell'. Let me emphasise I'm talking here about type synonyms _not_
Datatypes.

That constraint-outside syntax got rejected very early in the history of
Haskell, apparently because nobody could give an adequate specification. Is
my expand-it-as-syntax proposal not viable?

AntC


How to user-define a type equality constraint?

2021-04-01 Thread Anthony Clayden
Type equality `(~)` is a fine constraint. It's mildly annoying I need
either `-XGADTs` or `-XTypeFamilies` to use it -- because I don't otherwise
need those extensions. OTOH it's not H2010 so it needs to be switched on
somehow.

I see the Committee is discussing what to do. It's to be enabled by
`-XTypeOperators`. That would be just as annoying for me -- I don't
otherwise use Type Operators. If TypeOperators is to be the default for
GHC2021, that more or less convinces me against GHC2021. (I'm not a
newbie/I want to fine-tune which extensions I'm using.)

@int-index says
https://mail.haskell.org/pipermail/ghc-steering-committee/2021-April/002373.html

> What convention do you have in mind? (~) is not that different from, say,
(+), which is a valid type operator:

I believe `(~)` is different in that I can't define it in user code(?)
Indeed:

> (~) is a special, magical constraint, like Coercible and Typeable. ...,
the idea is to (eventually) require an explicit import of
Data.Type.Equality to indicate the use of (~) in the code,

I'd be happy to not import `(~)`/not use TypeOperators, and define an
ordinary typeclass/constraint, say `TypeCast a b`. There's a `TypeCast`
defined in HList 2004, but it needs FunDeps; more awkwardly it needs a
method and for that method to appear in term-level code.

Can I user-define a conventional type-class that behaves more like `(~)`?

(The "convention I have in mind" is that already in H2010,`~` is a special
symbol for irrefutable patterns. Then I'd prefer constraint `(~)` is also
special syntax; and at term level I'd prefer it be available as a monadic
prefix operator.)

AntC


Re: How to user-define a type equality constraint?

2021-04-06 Thread Anthony Clayden
> But why does this matter?

Because I want the semantics of that equality constraint, without switching
on any of these, which I don't otherwise use:

GADTs
TypeFamilies
TypeOperators

And if that means I can't use infix `~` in my constraints, I'll put up with
that. (I'd user-define a conventional class, say `TypeCast`. Could be that
has to be defined in terms of `~`, in which case I'd put that in a shim
module as the only place with those extensions.)
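
A sketch of what I mean by that shim (untested; the module and class names are
just for illustration):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances,
             UndecidableInstances, TypeFamilies #-}
-- Shim.hs: the only module that mentions (~). Clients just import
-- TypeCast/typeCast and don't need GADTs or TypeFamilies themselves.
module Shim (TypeCast, typeCast) where

class    (a ~ b) => TypeCast a b where typeCast :: a -> b
instance (a ~ b) => TypeCast a b where typeCast = id

A given `TypeCast a b` in a signature's context yields `a ~ b` via the
superclass; a wanted one is discharged by the catch-all instance, which just
emits `a ~ b` to be solved by unification.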


Pattern synonym 'Required' constraints === Datatype Contexts(?)

2021-03-09 Thread Anthony Clayden
I must be slow on the uptake. I've just grokked this equivalence -- or is
it? Consider

>data Eq a => Set a = NilSet | ConsSet a (Set a)    -- from the Language report
>
>-- ConsSet :: forall a. Eq a => a -> Set a -> Set a   -- inferred/per report
>
>--  equiv with Pattern syn 'Required' constraint
>data Set' a = NilSet' | ConsSet' a (Set' a) -- no DT context
>
>pattern ConsSetP :: (Eq a) => () => a -> (Set' a) -> (Set' a)
>pattern ConsSetP x xs = ConsSet' x xs
>
>ffP ((ConsSet x xs), (ConsSetP y ys)) = (x, y)
>
>-- ffP :: forall {a} {b}. (Eq a, Eq b) => (Set a, Set' b) -> (a, b)
 -- inferred

The signature decl for `ConsSetP` explicitly gives both the Required `(Eq
a) =>` and Provided `() =>` constraints, but the Provided could be omitted,
because it's empty. I get the same signature for `ConsSetP` as for
`ConsSet` with the DT Context. Or is there some subtle difference?

This typing effect is what got DT Contexts called 'stupid theta' and
deprecated/removed from the language standard. ("widely considered a
mis-feature", as GHC is keen to tell me.) If there's no difference, why
re-introduce the feature for Patterns? That is, why go to the bother of the
double-context business, which looks weird, and behaves counter to usual
signatures:

>foo :: (Eq a) => (Show a) => a -> a
>--   foo :: forall {a}. (Eq a, Show a) => a -> a -- inferred

There is a slight difference possible with Pattern synonyms, compare:

>pattern NilSetP :: (Eq a) => () => (Set' a)
>pattern NilSetP = NilSet'
>
>-- NilSetP :: forall {a}. Eq a => Set' a -- inferred
>-- NilSet  :: forall {a}. Set a         -- inferred/per report

Using `NilSetP` somewhere needs an explicit signature, otherwise your types
are ambiguous; but arguably that's a better discipline than using `NilSet`
and allowing a Set with non-comparable element types.

AntC
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Pattern synonym 'Required' constraints === Datatype Contexts(?)

2021-03-12 Thread Anthony Clayden
Thank you Richard, Lennart, *Gergő*

> pattern Positive :: (Ord a, Num a) => a
> pattern Positive <- ((>0) -> True)

Heh heh, there's another surprise/undocumented 'feature'.

It's not necessary to give a signature for pattern `Positive`; GHC
will infer that from the decl.

I was surprised to see `True`, and even more surprised there wasn't a
`Bool` in the signature.  I guess that's so `Positive` can appear as a
pattern in a case expr. To dig out the positive value in a lambda
expr, it seems I go

> (\p@Positive -> p) 5   -- returns 5

Seems I can't use any trick like that to turn `Positive` into
explicitly bidirectional. I also tried

> pattern Positive' <- ((>0) -> ())

But that's rejected: "Couldn't match expected type `Bool' with
actual type `()'".
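
For what it's worth, a sketch of a variant that does go explicitly
bidirectional by binding the matched value -- so a different pattern from
the nullary `Positive` above; the name and the helper are made up:

>{-# LANGUAGE PatternSynonyms, ViewPatterns #-}
>
>positiveOf :: (Ord a, Num a) => a -> Maybe a
>positiveOf x | x > 0     = Just x
>             | otherwise = Nothing
>
>pattern PositiveV :: (Ord a, Num a) => a -> a
>pattern PositiveV x <- (positiveOf -> Just x)   where
>  PositiveV x = x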

Is the hidden `Bool` documented somewhere? (Doesn't seem to be in the
User Guide nor the wiki nor the paper, on a quick scan.)
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: InstanceSigs -- rationale for the "must be more polymorphic than"

2021-09-17 Thread Anthony Clayden
> If you would like to offer a patch for the user manual to explain this
better, that would be great.

Thank you Simon for the invitation.

On further investigation https://gitlab.haskell.org/ghc/ghc/-/issues/20357,
what I'd like the user manual to say is:

"InstanceSigs is a mis-feature. Don't use it. It is less confusing to just
give no signature at all. If you really, really want to bind tyvars, use
PatternSignatures."
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Was: [Haskell-cafe] Haskell reference documentation, laws first or laws last?

2021-09-19 Thread Anthony Clayden
(Moving this discussion to glasgow-users. It's just not appropriate on the
cafe.)


> I am no longer a novice, and yet would still have a hard time making any use
of the laws as written in constructing instances. Instead, I'd ignore the
laws and write a natural intuitive instance, and it would invariably work.

Seems my approach is very similar to Viktor's. My (very informal)
understanding of the Laws looks nothing like the docos. I regard Foldable
structures as merely more efficient ways to hold a List. Then I expect
'moral equivalences':

> toList . fromList ~=~ id -- going via the Foldable structure
> fromList . toList ~=~ id
> toList ~=~ foldr (:) []

But those aren't equalities. 'moral equivalence' means the Lists have the
same elements, not necessarily in the same order; the structures have the
same elements but possibly in a different arrangement -- that is, in the
`Tree` example, there might be `Empty` scattered about, and elements held
variously in `Leaf`s vs `Node`s. So more accurately:

> fromList . toList . fromList === fromList   -- i.e. there's a 'canonical' arrangement
> toList . fromList . toList === toList       -- i.e. there's a 'canonical' List ordering

(That triple-journey business is a similar style to defining Lattice
pseudocomplements https://en.wikipedia.org/wiki/Pseudocomplement#Properties --
if I can chuck in some math theory.)

I'd expect all other methods to be one of: `reduceStuff === reduceStuff .
toList` or `mapStuff === fromList . mapStuff . toList`.

But! there's no method `fromList` in Foldable. Why not?/please explain.
(Are there Foldable structures which we can't load from a List? At least
assuming the List is finite.) `fromList` is the first thing I write after
declaring the datatype, so I can easily load up some test data. There is
one example `fromList` in the doco. Is that not generalisable? `foldMap
Leaf` would be brutal, but should work? `foldMap singleton` ? (But there's
no method `singleton`.)
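
(For concreteness, the sort of `fromList` I have in mind, assuming the
`Tree` type from the Foldable doco; no balancing, no invariants -- just
enough to load test data:)

>data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
>
>instance Foldable Tree where
>  foldr _ z Empty        = z
>  foldr f z (Leaf x)     = f x z
>  foldr f z (Node l x r) = foldr f (f x (foldr f z r)) l
>
>fromList :: [a] -> Tree a
>fromList []     = Empty
>fromList (x:xs) = Node Empty x (fromList xs)   -- right-leaning spine, no balancing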
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


InstanceSigs -- rationale for the "must be more polymorphic than"

2021-08-08 Thread Anthony Clayden
I can't help but feel InstanceSigs are either superfluous or upside-down.
It's this bit in the User Guide:

> The type signature in the instance declaration must be
> more polymorphic than (or the same as) the one in the class declaration,
> instantiated with the instance type.

Usually if you give a signature, it must be _less_ polymorphic (or the same
as) the type inferred from the term:

>lessPolyPlus :: Integral a => a -> a -> a
>lessPolyPlus x y = x + y

Or

>lessPolyPlus (x :: a) y = x + y :: Integral a => a

The examples in the User Guide aren't helping: you could just drop the
InstanceSigs, and all is well-typed. (Even the example alleging to use
-XScopedTypeVariables in a where sub-decl: you could just put random `xs ::
[b]` without scoping `b`.)

Dropping the Sigs altogether works because the type from the class decl,
suitably instantiated, is less polymorphic than inferred from the term. IOW
the suitably instantiated type restricts what would otherwise be inferred.
Situation normal.

I suppose it might be helpful to give an explicit InstanceSig as 'belt and
braces' for the instantiated type -- possibly because the instantiation is
hard to figure out; possibly because you want to use -XScopedTypeVariables
within a where-bound sub-decl, as an extra piece of string.

I can see you mustn't make the InstanceSig _less_ polymorphic than the
suitably instantiated type.

But the docos don't give any example where it's essential to provide an
InstanceSig _and_  make it strictly more polymorphic. Here all the sigs and
annotations are just superfluous:

>maxPolyPlus :: Num a => a -> a -> a
>maxPolyPlus = (+)
>
>class C a  where foo :: a -> T a
>instance Integral a => C a  where
>  foo :: Num a => a -> T a
>  foo (x :: a) = MkT (maxPolyPlus x x :: Num a => a)

Is there a persuasive example (to put in the User Guide)?

AntC
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: InstanceSigs -- rationale for the "must be more polymorphic than"

2021-08-10 Thread Anthony Clayden
On Tue, 10 Aug 2021 at 21:15, Simon Peyton Jones 
wrote:

>
>
> I think you see why the instance sig must be at least as polymorphic ...
>

Thanks Simon, I do now see, but I'd have to say there's a heck of a lot of
questions on StackOverflow (most not from me) expressing surprise/asking why.
See more below.


>
>
I agree with David’s (1) and (2) reasons, but not with (3) or (4), neither
> of which I quite understand.
>
>
>

(1) is documented in the User Guide, but the magic words "compiler-checked"
don't appear.

(2) is documented and illustrated, but in that example the ScopedTyVar is
superfluous.


> There is no compelling reason to make the instance signature *more*
> polymorphic
>

I think there's an ergonomics reason (which is illustrated but not
explained):

(5) If the instance is constrained, there's no need to repeat the
constraints in the InstanceSig, so reducing clutter/making the Sig easier
to read (as you say). This results in making the InstanceSig more
polymorphic:

>data T a = MkT a a
>instance Eq a => Eq (T a) where
>   (==) :: T a -> T a -> Bool                        -- The signature
>   (==) (MkT x1 x2) (MkT y1 y2) = x1==y1 && x2==y2

That Sig could be

>   (==) :: Eq a => T a -> T a -> Bool   -- repeat constraint, but clutter



> – that is, doing so does not increase expressiveness.  But it might make a
> briefer, more comprehensible type for the reader.  The only alternative
> would be to insist that the two types are the same, a completely redundant
> test, but one you might conceivably want on stylistic grounds.
>

More to the point: it would bring into scope the ScopedTyVars.


>
>
> All in all, no big deal.  Instance signature are a convenience, never a
> necessity.
>
>
>
> If you would like to offer a patch for the user manual to explain this
> better, that would be great.
>
>
>
>
[Discussion cont. from first para]:

 ... [more polymorphic] than the instantiated signature from the class –
> because that’s what the client is going to expect.  We are building a
> record of functions, and they must conform to the class signature.
>

And that answer is unsatisfactory. It amounts to: we didn't think in
advance how useful it would be to put tighter constraints on methods --
particularly for the tyvars that are 'private' to the method/not from the
instance head -- that is within Constructor classes. Allowing that would be
a partial (heh, heh) way towards 'Restricted Data Types' Hughes 1999 and
Eisenberg et al 2020 (ref'ing many earlier attempts), and indeed Haskell
Language Report vintage 1990 (the design before 'stupid theta').

I suppose it's beyond a wild dream to allow the InstanceSig to be at least
as polymorphic wrt the types/vars from the instance head but possibly less
polymorphic wrt the method's private types/vars? The actual type inferred
would be the mgu of the InstanceSig given with the substitution from the
instance head & constraints.




>
>
> *From:* Glasgow-haskell-users  *On
> Behalf Of *David Feuer
> *Sent:* 08 August 2021 09:37
> *To:* Anthony Clayden 
> *Cc:* GHC users 
> *Subject:* Re: InstanceSigs -- rationale for the "must be more
> polymorphic than"
>
>
>
> To the best of my knowledge, `InstanceSigs` are never strictly necessary.
> They can, however, be useful for at least four purposes:
>
>
>
> 1. To provide a compiler-checked reminder of the type.
>
> 2. To bind type variables with `ScopedTypeVariables`.
>
> 3. To generalize the type so you can use polymorphic recursion.
>
> 4. To enhance parametricitry/polymorphism for internal documentation
> purposes.
>
>
>
> The third reason is probably the main technical one to allow a more
> general signature, but the fourth is likely helpful too.
>
>
>

Re: Pattern synonym constraints :: Ord a => () => ...

2021-10-06 Thread Anthony Clayden
On Wed, 6 Oct 2021 at 21:24, Simon Peyton Jones 
wrote:

>
>
> I suggest the User Guide needs an example where a constraint needed for
> matching (presumably via a View pattern) is not amongst the
> constraints carried inside the data constructor, nor amongst those needed
> for building. Then the limitations in the current design would be more
> apparent for users.
>
>
>
> The user manual
> 
> does already speak about the type of a builder, here:
>
...
   How could we make that clearer?


This point in that section of the Guide is wrong/misleading:

   - ⟨CReq⟩ are the constraints *required* to match the pattern.

⟨CReq⟩ must be the union of the constraints required to match the pattern,
_plus_ those required to build with the pattern -- if it is bidirectional.


Then thank you Simon, but it's the type of the _matcher_ that's
problematic. The only mechanism for getting the constraints needed for
building is by polluting the constraints needed for matching. Here's a
(crude, daft) example, using guards to 'raise' a constraint that's
required-for-failing-to-build but isn't required-for-successful-building
nor for-matching:

>pattern TwoNode :: (Show a, Ord a) => () => a -> a -> Tree a
>                         -- GHC insists on both constraints as Req'd
>
>pattern TwoNode x y <- Node Empty x (Leaf y)   where
>  TwoNode x y | x > y     = Node Empty x (Leaf y)
>              | otherwise = error (show x ++ " not greater " ++ show y)

To quote you from May 1999

>   But when you take a constructor *apart*, the invariant must hold
>   by construction: you couldn't have built the thing you are taking
>   apart unless invariant held.  So enforcing the invariant again is
>   redundant; and in addition it pollutes the type of selectors.


`Show a` must have "held by construction" of the `Node`. But the PatSyn's
constraints are requiring more than that it held in some distant line of
code: they want *evidence* in the form of a dictionary at the point of
deconstructing; since the build was successful, I ipso facto don't want to
`show` anything in consuming it. An `instance Foldable Tree` has no
mechanism to pass in any such dictionaries (which'll anyway be redundant,
as you say).
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Pattern synonym constraints :: Ord a => () => ...

2021-10-03 Thread Anthony Clayden
Thank you to Richard for the Tweag tutorials on Pattern Synonyms. That
third one on Matchers was heavy going. I didn't find an answer (or did I
miss it?) to something that's bugging me:

>pattern  SmartConstr :: Ord a => () => ...

Seems to mean:

* Required constraint is Ord a  -- fine, for building
* Provided constraint is Ord a  -- why? for matching/consuming

I'm using `SmartConstr` with some logic inside it to validate/build a
well-behaved data structure. But this is an ordinary H98 datatype, not a
GADT.

I don't want to expose the datatype's underlying constructor, because
client code could break the abstraction/build an ill-formed data structure.

If I pattern-match on `SmartConstr`, the consuming function wants `Ord a`.
But I can't always provide `Ord a`, because this isn't a GADT. And the
client's function could be doing something that doesn't need `Ord a` --
like counting elements, or showing them or streaming to a List, etc.

This feels a lot like one of the things that's wrong with 'stupid theta'
datatype contexts.

My work-round seems to be to define a second `ReadOnlyConstr` without
constraints, that's unidirectional/ can't be used for building.

For definiteness, the use case is an underlying non-GADT constructor for a
BST

>  Node :: Tree a -> a -> Tree a -> Tree a
>
>pattern SmartNode :: Ord a => () => Tree a -> a -> Tree a -> Tree a

with the usual semantics that the left Tree holds elements less than this
node. Note it's the same `a` with the same `Ord a` 'all the way down' the
Tree.
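
(For concreteness, a minimal sketch of the shape I mean -- the builder
here is a stub; real code would validate the BST invariant where the
comment says, and the names are only illustrative:)

>{-# LANGUAGE PatternSynonyms #-}
>
>data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
>
>pattern SmartNode :: Ord a => () => Tree a -> a -> Tree a -> Tree a
>pattern SmartNode l x r <- Node l x r   where
>  SmartNode l x r = Node l x r   -- real code: check/restore the invariant here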

If some function is consuming the Tree, it can provide constraints for its
own purposes:

>member   :: Ord a => a -> Tree a -> Bool
>dumbElem :: Eq a  => a -> Tree a -> Bool
>max  :: Ord a => Tree a -> a

(That again is the same thinking behind deprecating datatype contexts.)

>countT (SmartNode l x r) = countT l + 1 + countT r   -- why infer Ord a => ?
>
>class FancyShow t  where
>  fancyShow :: Show a => Int -> t a -> String
>instance FancyShow Tree  where
>  fancyShow indent (SmartNode l x r) = ...   -- rejected: Could not deduce Ord a
>

(Ref the parallel thread on Foldable: client code can't declare an instance
for a Constructor class using SmartConstr.)

I can see commonly your Provided would be at least the constraints inside
the GADT constructor. But why presume I have a GADT?  (And yes I get that a
devlishly smart pattern might be building different GADT constrs/various
constraints, so this is difficult to infer.)

AntC
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Pattern synonym constraints :: Ord a => () => ...

2021-10-05 Thread Anthony Clayden
Thank you Richard (and for the reply to a similar topic on the cafe).

What I meant by the comparison to 'stupid theta' is that GHC's
implementation of datatype contexts used to be mildly useful and moderately
sensible. Then it went stupid, following this 'Contexts on datatype
declarations' thread:
http://web.archive.org/web/20151208175102/http://code.haskell.org/~dons/haskell-1990-2000/threads.html#04062

With the benefit of a great deal of hindsight, and noting that
PatternSynonyms makes explicit there's a builder vs a matcher, I'd say SPJ
was right and Wadler was wrong.

In particular consider trying to implement a Foldable instance (or for any
constructor class): the instance head doesn't reveal the 'content type'
`a`, so there's no way to provide `Ord a`. _But we don't need_ `Ord a`,
because the fold is consuming the content, not building a Foldable
structure. As SPJ says:

> *pattern-matching* on MkT as *eliminating* a constraint. But since the
dictionary isn't stored in the constructor we can't eliminate it.
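
(A minimal self-contained sketch of that failure, with `Tree` and a
cut-down `SmartNode` assumed as in the original post; the last clause is
rejected because the instance has no way to supply the `Ord a` that
matching Requires:)

>{-# LANGUAGE PatternSynonyms #-}
>
>data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
>
>pattern SmartNode :: Ord a => () => Tree a -> a -> Tree a -> Tree a
>pattern SmartNode l x r <- Node l x r
>
>instance Foldable Tree where
>  foldr _ z Empty             = z
>  foldr f z (Leaf x)          = f x z
>  foldr f z (SmartNode l x r) = foldr f (f x (foldr f z r)) l
>                -- rejected: can't deduce the Ord a that matching Requires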

So at first sight, I was hoping that a PatternSyn

>pattern SmartConstr :: Ord a => () => blah

would give the pre-May 1999 GHC behaviour for matching -- that is, Provide
nothing, and therefore ask for nothing if the constructor/PatSyn appears
only in a matcher position.

I'm afraid that I don't get your answer wrt PatternSyns that wrap H98
datatypes: how do I demonstrate a difference between the above sig vs:

>pattern SmartConstr :: Ord a => Ord a => blah

It seems to me the shorthand form (with a single `=>`) should be equiv to
the latter, allowing the former nothing-Provided `Ord a => () => blah` to
mean something different.

You say "the design was too complicated". And yet SPJ on that thread says "This
is the simpler choice".

>  you want a separate smartNode function (not a pattern synonym)

I might as well have a SmartNode PatSyn and (unidirectional)
MatcherOnlyNode PatSyn -- which was my work-round in the O.P. Can't you see
that's dysergonomic?

AntC

On Wed, 6 Oct 2021 at 05:32, Richard Eisenberg  wrote:

>
>
> On Oct 3, 2021, at 5:38 AM, Anthony Clayden 
> wrote:
>
> >pattern  SmartConstr :: Ord a => () => ...
>
> Seems to mean:
>
> * Required constraint is Ord a  -- fine, for building
>
>
> Yes.
>
> * Provided constraint is Ord a  -- why? for matching/consuming
>
>
> No. Your signature specified that there are no provided constraints:
> that's your ().
>
>
> I'm using `SmartConstr` with some logic inside it to validate/build a
> well-behaved data structure. But this is an ordinary H98 datatype, not a
> GADT.
>
>
> I believe there is no way to have provided constraints in Haskell98. You
> would need either GADTs or higher-rank types.
>
>
> This feels a lot like one of the things that's wrong with 'stupid theta'
> datatype contexts.
>
>
> You're onto something here. Required constraints are very much like the
> stupid theta datatype contexts. But, unlike the stupid thetas, required
> constraints are sometimes useful: they might be needed in order to, say,
> call a function in a view pattern.
>
> For example:
>
> checkLT5AndReturn :: (Ord a, Num a) => a -> (Bool, a)
> checkLT5AndReturn x = (x < 5, x)
>
> pattern LessThan5 :: (Ord a, Num a) => a -> a
> pattern LessThan5 x <- ( checkLT5AndReturn -> (True, x) )
>
>
> My view pattern requires (Ord a, Num a), and so I must declare these as
> required constraints in the pattern synonym type. Because vanilla data
> constructors never do computation, any required constraints for data
> constructors are always useless.
>
>
> For definiteness, the use case is a underlying non-GADT constructor for a
> BST
>
> >  Node :: Tree a -> a -> Tree a -> Tree a
> >
> >pattern SmartNode :: Ord a => () => Tree a -> a -> Tree a -> Tree a
>
> with the usual semantics that the left Tree holds elements less than this
> node. Note it's the same `a` with the same `Ord a` 'all the way down' the
> Tree.
>
>
> Does SmartNode need Ord a to match? Or just to produce a node? It seems
> that Ord a is used only for production, not for matching. This suggests
> that you want a separate smartNode function (not a pattern synonym) and to
> have no constraints on the pattern synonym, which can be unidirectional
> (that is, work only as a pattern, not as an expression).
>
> It has been mooted to allow pattern synonyms to have two types: one when
> used as a pattern and a different one when used as an expression. That
> might work for you here: you want SmartNode to have no constraints as a
> pattern, but an Ord a constraint as an expression. At the time, the design
> with two types was considered too complicated and abandoned.
>
> Does this help?
>
> Richard
>
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Pattern synonym constraints :: Ord a => () => ...

2021-10-05 Thread Anthony Clayden
Thank you. Yes that proposal seems in 'the same ball park'. As Richard's
already noted, a H98 data constructor can't _Provide_ any constraints,
because it has no dictionaries wrapped up inside. But I'm not asking it to!

The current PatSyn signatures don't distinguish between
Required-for-building vs Required-for-matching (i.e.
deconstructing/reformatting to the PatSyn). This seems no better than
'stupid theta': I'm not asking for any reformatting to pattern-match, just
give me the darn components, they are what they are where they are.

I'm afraid none of this is apparent from the User Guide -- and I even
contributed some material to the Guide, without ever understanding that.
Before this thread, I took it that 'Required' means for building -- as in
for smart constructors. So PatSyns aren't really aimed to be for smart
constructors? I should take that material out of the User Guide?


AntC


On Wed, 6 Oct 2021 at 10:53, Richard Eisenberg  wrote:

> You're right -- my apologies. Here is the accepted proposal:
> https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0042-bidir-constr-sigs.rst
>
> Richard
>
> On Oct 5, 2021, at 12:38 PM, David Feuer  wrote:
>
> To be clear, the proposal to allow different constraints was accepted, but
> integrating it into the current, incredibly complex, code was well beyond
> the limited abilities of the one person who made an attempt. Totally
> severing pattern synonyms from constructor synonyms (giving them separate
> namespaces) would be a much simpler design.
>
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Pattern synonym constraints :: Ord a => () => ...

2021-10-05 Thread Anthony Clayden
Thanks Gergö, I've read that paper many times (and the User Guide). Nowhere
does it make the distinction between required-for-building vs
required-for-matching. And since most of the syntax for PatSyns (the
`where` equations) is taken up with building, I'd taken it that "required"
means required-for-building.

There is one paragraph towards the end of section 6 that kinda hints at the
issue here. It's so cryptic it's no help. "An alternative would be to carry
two types for each pattern synonym: ...". But already PatSyns carry two
sets of _constraints_. The matrix type after the constraints is determined
by the mapping to/from the data constructor. Why would there be two of
those? What this paragraph might mean (?) is 'carry three sets of
constraints', but put one set in a completely different signature. As per
the proposal.

>  they [Required constraints] are "required" to be able to use the pattern
synonym.

Is highly misleading. Only as a result of this thread (not from the User
Guide nor from the paper) do I discover "use" means match-on. The paper
really does not address typing for "use" for building. I agree with SPJ's
comment (quoted in the proposal) "This turns out to be wrong in both
directions."

I suggest the User Guide needs an example where a constraint needed for
matching (presumably via a View pattern) is not amongst the
constraints carried inside the data constructor, nor amongst those needed
for building. Then the limitations in the current design would be more
apparent for users.

Perhaps I'm just stupid, and should be disqualified from using such
features. (I keep away from GADTs for those reasons.) So I'm not going to
volunteer to revise the User Guide further.


On Wed, 6 Oct 2021 at 15:26, Gergő Érdi  wrote:

> If you haven't yet, it is probably a good idea to read section 6 of
> https://gergo.erdi.hu/papers/patsyns/2016-hs-patsyns-ext.pdf
>
> On Wed, Oct 6, 2021 at 10:23 AM Gergő Érdi  wrote:
> >
> > > I'm afraid none of this is apparent from the User Guide -- and I even
> contributed some material to the Guide, without ever understanding that.
> Before this thread, I took it that 'Required' means for building -- as in
> for smart constructors.
> >
> > No, that's not what the required/provided distinction means at all!
> >
> > You should think of both Provided and Required in the context of
> > matching, not in the context of building. To be able to use a pattern
> > synonym to match on a scrutinee of type T, not only does T have to
> > match the scrutinee type of the pattern synonym, but you also must
> > satisfy the constraints of the Required constraints -- they are
> > "required" to be able to use the pattern synonym. On the flipside,
> > once you do use the pattern synonym, on the right-hand side of your
> > matched clause you now get to assume the Provided constraints -- in
> > other words, those constraints are "provided" to you by the pattern
> > synonym.
> >
> > It is true that the builder could have entirely unrelated constraints
> > to either (as in the proposal). The current implementation basically
> > assumes that the Provided constraints can be provided because the
> > builder put them in.
> >
> > Does this make it clearer?
> >

Typing of Pattern Synonyms: Required vs Provided constraints

2022-01-05 Thread Anthony Clayden
There was an interesting exchange between the authors of Haskell compilers
back in 1999 (i.e. when there were multiple compilers)
http://web.archive.org/web/20151001142647/http://code.haskell.org/~dons/haskell-1990-2000/msg04061.html

I was trying to simulate SPJ's point of view, using PatternSynonyms.

>
>{-#  LANGUAGE DatatypeContexts, PatternSynonyms  #-}
>
>data Ord a => TSPJ a = MkTSPJ a a
>
>pattern PatTSPJ :: (Ord a) => () => a -> a -> TSPJ a   -- empty Provided
>pattern PatTSPJ x y = MkTSPJ x y
>
>unPatTSPJ :: TSPJ a -> (a, a)                           -- no constraints
>unPatTSPJ (PatTSPJ x y) = (x, y)
>

`unPatTSPJ`'s binding is rejected with `No instance for (Ord a) arising
from a pattern`. If I don't give a signature, the inferred type is
`unPatTSPJ :: Ord b => TSPJ b -> (b, b)`.

Taking the DatatypeContext off the `data` decl doesn't make a difference.

So Pattern syns seem to be giving exactly the 'stupid theta' behaviour.

The User Guide
https://downloads.haskell.org/~ghc/8.10.7/docs/html/users_guide/glasgow_exts.html#typing-of-pattern-synonyms
says


   - ⟨CProv⟩ are the constraints *made available (provided)* by a successful
     pattern match.

But it doesn't mean that. It's more like "⟨CProv⟩ are the constraints made
available *in addition to ⟨CReq⟩* (provided union required) by a successful
pattern match."

Or ... is there some way to simulate the up-to-1999 GHC behaviour? (I
happen to agree with SPJ; Wadler wasn't thinking it through: consider for
example constructor classes over type constructors with constraints --
there's nothing in the instance head for the constraint to attach to. Now
that we have GADTs, which are appropriate for the different job that DT
Contexts perhaps were being (ab-)used for, I agree more strongly with
SPJ.)

With GADTs I could get an `unGSPJ` that doesn't expose/provide the
constraint, but it does that by packing the constraint/dictionary
(polluting) inside the data constructor. Not what I want. As SPJ says
down-thread:

 "when you take a constructor *apart*, the invariant must hold
by construction: you couldn't have built the thing you are taking
apart unless invariant held.  So enforcing the invariant again is
redundant; and in addition it pollutes the type of selectors."
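
For concreteness, a minimal sketch of that GADT variant (names assumed):

>{-# LANGUAGE GADTs #-}
>
>data GSPJ a where
>  MkGSPJ :: Ord a => a -> a -> GSPJ a   -- dictionary packed inside every value
>
>unGSPJ :: GSPJ a -> (a, a)              -- no constraint needed to consume
>unGSPJ (MkGSPJ x y) = (x, y)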


AntC
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Typing of Pattern Synonyms: Required vs Provided constraints

2022-01-06 Thread Anthony Clayden
On Fri, 7 Jan 2022 at 09:08, Richard Eisenberg  wrote:

>
>
> On Jan 5, 2022, at 9:19 PM, Anthony Clayden 
> wrote:
>
> So Pattern syns seem to be giving exactly the 'stupid theta' behaviour.
>
>
> In your example, yes: the Required context is "stupid" in the way that
> "stupid theta" is. The reason to have a Required context is if your pattern
> synonym does computation that requires a constraint. ...
>

I don't think that's the only (or even the chief) reason. Wadler's response
on that 1999 thread is telling

"Often, a type will make no sense without the constraints; e.g., an
association list type Alist a b makes no sense unless Eq a holds. The class
constraints on data declarations were a simple way for the user to ask the
compiler to enforce this invariant. They have compile-time effect only, no
effect whatsoever on run-time (in particular, no dictionaries should be
passed).  "

And yet he's somehow forgotten his own design for constraints: the _only_
way to enforce a constraint is by providing evidence in the form of a
dictionary. (This is especially impossible for a constructor class where
it's the (invisible) argument type to the constructor that wants the
constraint.) That's why I agree with SPJ 1999 here: on a pattern match
there's no mechanism to pass in a constraint (dictionary-as-evidence); and
you're not doing computation in merely matching -- it doesn't even need
`Eq`.

In a pattern

>baz (MkT 3 3) = "Three"   -- desugars to
>baz (MkT x y) | x == fromInteger #3#, y == fromInteger #3# = "Three"

It's the appearance of the `==` and `fromInteger` that Require `(Eq a, Num
a)`; I don't expect them Provided by `T`'s nor `MkT`'s Datatype context.
And in general, guards can induce arbitrary computation; it's not the
responsibility of the datatype declarer to anticipate that. (A Separation
of Concerns argument.)

So I want constraints 'Required' for building -- that is, constraints
'sighted' whether or not any computation is involved; but not 'Provided' by
matching -- because they're already sighted by the very fact of having
built the pattern (to paraphrase SPJ 1999). Thanks for pointing out that
proposal. I guess it'll achieve what I want for a BST or Wadler's assoc
list; it would be nice to see an explicit example.

I don't want to use a GADT here, because it's the same constraint on every
data constructor; and a BST recurses with the same constraint; holding
identical dictionaries inside each node is just clutter. Something
accessing the BST (probably) needs to bring its `Ord` dictionary with it
(`elem` for example); but not necessarily (`count`, `toList` or `show`).



>
>
> The User Guide
> https://downloads.haskell.org/~ghc/8.10.7/docs/html/users_guide/glasgow_exts.html#typing-of-pattern-synonyms
>  says
>
>
>- ⟨CProv⟩ are the constraints *made available (provided)* by a
>successful pattern match.
>
> But it doesn't mean that. It's more like "⟨CProv⟩ are the constraints
> made available *in addition to ⟨CReq⟩* (provided union required) by a
> successful pattern match."
>
>
> I agree with the user guide here: the Provided constraints are made
> available. The Required constraint must *already* be available before the
> pattern match,
>

I disagree. To "be available" requires a dictionary. A pattern synonym gets
desugared, it doesn't hold a dictionary itself. The dictionary *was*
"already available" at the time of building; but it's no longer available
unless the PatSyn got desugared to a GADT with that dictionary inside. I
want these PatSyns desugared to ordinary H98 data constructors.

And I want an empty Provided `:: (Ord a) => () => a -> a -> T a` to mean
nothing is Provided, not even what was Required.


>
>
> To get the pre-1999 behavior, you would need a different type for a
> pattern synonym used as a pattern than for one used as an expression. This
> is the subject of the accepted-but-not-implemented
> https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0042-bidir-constr-sigs.rst
>
>
>


AntC
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Fwd: Typing of Pattern Synonyms: Required vs Provided constraints

2022-01-06 Thread Anthony Clayden
Thanks Gergö, I do find Richard's tendency to use ViewPattern examples
distracts me from understanding the point. That arrow-to-nowhere or
arrow-to-the-wrong-value syntax is nausea-inducing.

Your "instructive" example is really nothing to do with PatSyns, just
ordinary Haskell numeric patterns: the Language Report (section 3.17.2
point 7., same wording in 2010 and '98) says a numeric literal in a pattern
gets desugared to an `(==)` test against `fromInteger` applied to the
literal's underlying `Integer`. So that explains where the constraints come
from. (Annoyingly, that underlying `Integer` is something you can't express
in legit Haskell source.)

Avoiding distractions is why my o.p. used an `Ord` constraint, also to
follow SPJ's 1999 example. And the canonical example in those old
discussions (like Hughes' 1999 paper) is an `Ord` constraint for elements
of a `Set` structured as a BST.

With your "polymorphic literal" how do I follow the Language Report to
desugar? I'd rather be able to write Richard's example using an as-Pattern
like one of these:

>pattern x@Is3 | x == fromInteger (#3#) = x   -- where #3# is the inexpressible Integer
>pattern x@3@Is3 = x

(It's not that I'm "new to patsyn typing"; it's more that patsyn typing and
design keeps falling short of what I hope from such a promising feature. So
I always want to check I'm not mis-understanding. Specifically, the design
for Pattern Synonyms was a lost opportunity to expunge ViewPatterns
entirely.)

AntC

On Fri, 7 Jan 2022 at 09:43, Gergő Érdi  wrote:

> Fwiw, a less contrived, and much more relatable, version of Richard's
> example would be
>
>
> pattern Is3 :: (Num a, Eq a) => a   -- only a Required constraint
> pattern Is3 = 3 -- a polymorphic literal!
>
> I think it can be quite instructive for people new to patsyn typing to
> work out why this is exactly the same as the one in Richard's email!
>
>
> On Thu, Jan 6, 2022, 21:11 Richard Eisenberg  wrote:
>
>>
>>
>> ...
>>
>>
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: Typing of Pattern Synonyms: Required vs Provided constraints [was cafe DatatypeContexts / alternative]

2022-01-06 Thread Anthony Clayden
[Picking up a cafe thread from February that fits here
https://mail.haskell.org/pipermail/haskell-cafe/2021-February/133459.html.]

On 23.02.21 at 20:07, Richard Eisenberg wrote:
> You might be interested in my recent paper on exactly this problem:
> how to make DatatypeContexts actually work the way you want:
> https://richarde.dev/papers/2020/partialdata/partialdata.pdf


I think that paper is _not_ how DatatypeContexts should work -- at
least for the example in that thread of `fmap` over a data structure
wanting to preserve some invariant, such as unique elements.


   - For the matching/Provided constraints the paper at least does no
harm by magically unveiling constraints via a Well-Formed-Type
mechanism. It would equally do no harm by just not Providing any
constraints at all.
   - For the building/Required constraints: it would do harm to
magically unveil (for example) a computation to squish out duplicates
or re-order elements by their `fmap`ed result.

The Laws for `Functor.fmap` include "preserving the structure of" the
Functor. Squishing out duplicates/reordering/rebalancing breaks that.

Instead of `fmap` you should use `Foldable.foldMap :: Monoid m => (a -> m)
-> t a -> m`; with the `m` instance of the form `Ord b => Monoid (t b)` --
`Ord` induced from the Datatype context of `t`. Then there is a mechanism
to pass in the `Ord` dictionary from outside/no need for WFT magic.
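
A minimal sketch of what I mean, with a sorted duplicate-free list standing
in for the BST (all the names here are made up for illustration):

>import Data.List (nub, sort)
>
>newtype OrdSet a = OrdSet [a] deriving Show
>
>singleton :: a -> OrdSet a
>singleton x = OrdSet [x]
>
>instance Ord a => Semigroup (OrdSet a) where
>  OrdSet xs <> OrdSet ys = OrdSet (nub (sort (xs ++ ys)))
>instance Ord a => Monoid (OrdSet a) where
>  mempty = OrdSet []
>
>-- the invariant-preserving 'map', via foldMap rather than Functor.fmap:
>mapSet :: Ord b => (a -> b) -> OrdSet a -> OrdSet b
>mapSet f (OrdSet xs) = foldMap (singleton . f) xs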

I get it that Required `Ord b` is a poor stand-in for the invariant:
no duplicates; elements in ascending sequence; BST balanced. So the
Monoid instance still couldn't expose the underlying data
constructors.

Forcing to use `foldMap` at least puts it in the programmer's face
that trying to use `fmap` is a type error standing in for
law-breaking.

AntC

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: [ANNOUNCE] GHC 9.4.1 is now available

2022-08-14 Thread Anthony Clayden
Thanks Ben, there's a couple of broken links on the Downloads page, going
via the haskell.org/ghc 'GHC 9.4.1 Released!' link, rather than the link
below in your message:

* Release Notes
* Documentation

The urls seem to be using a different directory structure, ==> 404.

(Also the link in your message is to the binaries, not the distribution
packages.)

AntC



> The GHC developers are very pleased to announce the availability of GHC
> 9.4.1. Binary distributions, source distributions, and documentation are
> available at downloads.haskell.org:

>
>https://downloads.haskell.org/ghc/9.4.1
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


data families/newtype instances allow `DatatypeContext`s ?

2023-03-13 Thread Anthony Clayden
https://discourse.haskell.org/t/choosing-data-representation-based-on-type/5970/4?u=antc2

A `DatatypeContext` on a `newtype instance` gets the usual warning it's
deprecated. But the decl is accepted -- at GHC 8.10. (It doesn't achieve
what the O.P. was asking for -- but then I wouldn't expect it to.)

Is this behaviour deliberate, or more of an accident? I'm surprised such a
modern feature (datatype families) tolerates such an old/deprecated feature.

I suppose this means the two features are orthogonal -- which is kinda a
Good Thing (wrt some continuing discussions).

AntC
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Pattern Synonym decls only at top level

2023-04-17 Thread Anthony Clayden
" Pattern synonym declarations can only occur in the top level of a module.
In particular, they are not allowed as local definitions.  " says the User
Guide. And the 2016 paper says likewise. But there's no explanation why.

PattSyns are not necessarily declared in the same module as the underlying
Datatype, so there can be a region of scope with the data constructor but
without the PattSyn.

If PattSyns are declared in the same module as their datatype, you can
export the PattSyns but not the data constructors -- indeed that's a common
way to keep the Datatype implementation abstract.

Or (unusually) you can export the data constructors but not PattSyns. So
there seems to be no close tie-up of Datatype/constructors and PattSyns.
Then why couldn't a PattSyn be declared in local scope only?

The builder function behind a PattSyn is just like any other function, so
could be local(?) The paper's Implementation section 7 says the matcher
also is an "ordinary function".

Relatedly, PattSyns can in effect be overloaded: define the PattSyn in
terms of a ViewPattern function. That can be a method of some class. Then
why not more directly declare PattSyns within a class?
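
(For what it's worth, a minimal sketch of that overloading trick; the class
and all the names are made up for illustration:)

>{-# LANGUAGE PatternSynonyms, ViewPatterns #-}
>
>class HasHead f where
>  viewHead :: f a -> Maybe a
>
>instance HasHead [] where
>  viewHead []    = Nothing
>  viewHead (x:_) = Just x
>
>pattern Head :: HasHead f => a -> f a
>pattern Head x <- (viewHead -> Just x)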

[I have a feeling I answered a similar question on some forum, saying the
PattSyn has to follow around its data constructors, like a dog on a lead.
Can't find that now. If that was me, I apologise and withdraw.]

AntC
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users