Re: The GADT debate

2016-05-08 Thread wren romano
On Sun, May 8, 2016 at 1:38 AM, Carter Schonwald
 wrote:
> Peripherally, this also brings up the notion of type equality, and I'm a bit
> fuzzy about how big a chasm there is between what that means in Haskell 2010
> vs more bleeding edge styles, or am I pointing at shadows?

Generally speaking, notions of "equality" are a major source of
complication in any dependently typed theory. The issue isn't coming
up with a story that works, but rather choosing between a number of
different stories each of which works in slightly different ways. This
problem of "equality" is the bulk of why I think it'd be good to
postpone the GADT (and TypeFamily) debate.

One Haskell-specific example of why defining equality is hard is when
we try to handle both the "equalities" induced by dependent
case-analysis as well as the "equalities" induced by newtype
definitions. Though we use the same word for them, they're two
entirely different notions of "equality". However, since they both
exist, we must have some story for how these two notions interact with
one another (including possibly "not at all"). The new role-typing
stuff is one approach to handling this, but many people feel it's
overly complicated or otherwise not quite the right way forward. If we
add GADTs to the report, then we also need to define how to type check
dependent case-analysis, which seems to require introducing type
equality, which in turn requires specifying how the semantic notion of
type equality interacts with the operational notion of representation
equality introduced by newtypes, which is still imo an open area of
research.
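For concreteness, here's a small sketch of the two notions colliding, using GHC's `Data.Coerce` interface as one possible instantiation (the `Coercible` machinery and the `IsInt`/`Age` types here are GHC-specific illustrations, not anything the report specifies):

```haskell
{-# LANGUAGE GADTs #-}
import Data.Coerce (coerce)

-- Nominal ("semantic") equality: matching on the constructor
-- brings the equality (a ~ Int) into scope in that branch.
data IsInt a where
  IsInt :: IsInt Int

useEq :: IsInt a -> a -> Int
useEq IsInt x = x + 1  -- here a ~ Int, so x may be used as an Int

-- Representational equality: Age and Int share a runtime
-- representation, but the nominal equality (Age ~ Int) does NOT hold.
newtype Age = Age Int

ageToInt :: Age -> Int
ageToInt = coerce  -- OK: Coercible Age Int
-- ageToInt = id   -- rejected: id would require Age ~ Int
```

The open question is precisely how these two relations should be allowed to interact.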

If we can come up with some story that lets us make progress towards
eventually including GADTs (and TypeFamilies) while avoiding the
equality tarpit, I'm all for it. I'd just really rather avoid the
tarpit until we have other easier things squared away.

...

One possible way forward may be to introduce the syntax for (~)
constraints and define them merely as "delayed unification
constraints". This would require including unification in the report,
but may be doable in a way that allows non-unification-based
implementations as well.

The reason I bring this possibility up now is that it helps handle
some other situations where we don't even have GADTs or TypeFamilies.
In particular, FlexibleInstances allows non-linear use of type
variables in the instance head. However, this is often not what we
mean: "instance Foo (Bar i i)" requires that the two parameters to
Bar be unified *prior* to selecting that instance, which often leads
to type inference problems because we cannot resolve the instance
eagerly enough. Oftentimes what is actually desired is
"instance (i ~ j) => Foo (Bar i j)", which selects the instance
immediately whenever the type constructor is Bar, and only *after*
committing to the instance tries to unify the two parameters. Thus,
adding delayed unification constraints would be helpful for adding
(something like) FlexibleInstances to the report.
Of course, "delayed unification constraints" don't have any sort of
evidence associated with them like the dependent case-analysis
equalities do; so, again, we'd want to make sure to phrase things in
such a way that we don't prematurely commit to more than we intend.
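A hedged sketch of the difference in current GHC, with the proposed delayed constraint written via today's (~) syntax (the `Foo`/`Bar` names are invented for illustration):

```haskell
{-# LANGUAGE FlexibleInstances, TypeFamilies #-}

class Foo a where foo :: a -> String

data Bar i j = Bar

-- Non-linear head: the two parameters must already be known equal
-- before this instance can even be selected.
--   instance Foo (Bar i i) where foo _ = "bar"

-- Linear head plus a delayed unification constraint: commit to the
-- instance as soon as the head is Bar, and only then unify i with j.
instance (i ~ j) => Foo (Bar i j) where
  foo _ = "bar"

demo :: String
demo = foo (Bar :: Bar Int Int)
```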

-- 
Live well,
~wren
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime


Re: The GADT debate

2016-05-08 Thread wren romano
On Sun, May 8, 2016 at 11:25 AM, Richard Eisenberg  wrote:
> On May 7, 2016, at 11:05 PM, Gershom B  wrote:
>>
>> an attempt (orthogonal to the prime committee at first) to specify an 
>> algorithm for inference that is easier to describe and implement than 
>> OutsideIn, and which is strictly less powerful. (And indeed whose 
>> specification can be given in a page or two of the report).
>
> Stephanie Weirich and I indeed considered doing such a thing, which 
> conversation was inspired by my joining the Haskell Prime committee. We 
> concluded that this would indeed be a research question that is, as yet, 
> unanswered in the literature. The best we could come up with based on current 
> knowledge would be to require type signatures on:
>
> 1. The scrutinee
> 2. Every case branch
> 3. The case as a whole
>
> With all of these type signatures in place, GADT type inference is indeed 
> straightforward.

If all three of those signatures are present, doesn't that make
"inference" trivial? If I understand you correctly, the only thing to
do here would be to check the types, right?
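As a sketch of what such a fully annotated program might look like (in current GHC syntax; the `Expr` type and `eval` function are made-up examples, not from the cited discussion):

```haskell
{-# LANGUAGE GADTs, ScopedTypeVariables #-}

data Expr a where
  LitI :: Int  -> Expr Int
  LitB :: Bool -> Expr Bool

eval :: forall a. Expr a -> a
eval e =
  (case (e :: Expr a) of   -- 1. signature on the scrutinee
     LitI n -> n :: a      -- 2. signature on each case branch
     LitB b -> b :: a
  ) :: a                   -- 3. signature on the case as a whole
```

With all three annotations present, checking each branch is purely syntax-directed: the local equality (e.g. a ~ Int) discharges the branch's ascription.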

I am curious though. Since we don't have true/anonymous type-level
functions, why do we need the signatures on every branch (assuming the
scrutinee and whole-expression types are given)? I can see why they'd
be required in Coq/Agda/etc, but it's less clear why they'd be
required in Haskell per se.

-- 
Live well,
~wren


Re: Limber separators

2016-05-07 Thread wren romano
On Fri, May 6, 2016 at 3:58 PM, Cale Gibbard  wrote:
> I can't really be the only one here who thinks that this kind of
> discussion of extensions to the syntax of Haskell is totally
> inappropriate when we have a large number of already implemented
> extensions to the language whose interactions with each other are
> largely undocumented. The Haskell Report isn't the place to be talking
> about new features, in my mind. If this project turns into
> bike-shedding the concrete syntax of the language, it's hard to
> imagine real progress being made on the actual problem, which is that
> the Haskell Reports as they stand are not as relevant as they should
> be with regard to the language in which we're writing programs today.


+1.

One of my big concerns here is that the proposal is vague, and
therefore impossible to judge. As an example of what I mean, what
counts as a "separator"? Is it a special case only for commas? Why not
also include the vertical pipe in data type definitions? We run into
the same "difficult to one-line merge" issues when we write things
like:

data SomeReallyLongNameSoIWantToNewlineWrap
    = Foo ...
    | Bar ...
    ...

Coq and other ML-like languages allow a vertical pipe between the "="
and the first constructor name, so why shouldn't we? Or, what about
when people do parser combinator stuff and use the (<|>) operator as a
"separator", should we handle that too? If so, do we extend it to
other operators people may want as a leading separator, like (+) in
large arithmetic expressions? How should these be distinguished from
typos where an argument is missing or a section was indented?
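The (<|>) case, at least, is expressible in today's Haskell purely as a layout convention rather than a syntax extension; e.g., with a hypothetical `choice3` helper:

```haskell
import Control.Applicative (Alternative, (<|>))

-- Leading-operator layout: each alternative opens its line with
-- (<|>), playing the same visual role as a leading "|" in a data
-- declaration.
choice3 :: Alternative f => f a -> f a -> f a -> f a
choice3 p q r =
      p
  <|> q
  <|> r
```

The question for the proposal is whether anything beyond this existing convention needs standardizing.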


These sorts of complexities are the reason the Haskell' committee has
always focused on things which have already been implemented. The
implementation provides (a) a concrete specification of what's being
proposed, (b) an idea of how that proposal works in the wild, (c) a
proof that it's easily implementable. Of course, in the process of
getting included into the report the details may change; but it's a
very solid starting point. I'm not adamantly opposed to proposals
which aren't already implemented, but the proposal should measure up
to the same standards. (N.B., it may appear to be held to higher
standards, since people often neglect to consider the role an
implementation plays as a component of the proposal itself.)

As you say, four years is a decent chunk of time. Why not spend that
time getting it implemented as an extension in GHC? The implementation
process will work out a bunch of kinks in what exactly you mean by
"separators" and how to handle leading vs trailing vs redundant
separators. Having it available will also make it clearer how many
people like and want this behavior. Both of these contribute to making
the argument that we should in fact include it in the report.

-- 
Live well,
~wren


The GADT debate

2016-05-07 Thread wren romano
Hi all,

There's been some discussion about whether to consider including GADTs
in the new report, but it's been mixed up with other stuff in the
thread on incorporating extensions wholesale, which has unfortunately
preempted/preceded the discussion about how to go about having such
discussions(!).

My position on the debate is that we should avoid having the debate,
just yet. (Which I intended but failed to get across in the email
which unintentionally started this all off.) I think we have many much
lower-hanging fruit and it'd be a better use of our time to try and
get those squared away first. Doing so will help us figure out and
debug the process for having such debates, which should help the GADT
debate itself actually be fruitful, as well as letting us make
progress on other fronts, so we don't get mired down first thing.

Whenever the debate occurs, here's a summary of the relevant emails so
that they are not lost, buried in the emails of time:


* Andres Löh puts forth criteria on how to judge whether extensions
should be included. Mentions GADTs as an example of something that, if
we decide they're a good idea in principle (criterion #1), then we
should work towards things which will help them be easier to
feasibly/sensibly specify (criterion #2)

https://mail.haskell.org/pipermail/haskell-prime/2016-May/004104.html

* wren agrees with the criteria, loves GADTs, thinks they do not fully
satisfy criterion #2

https://mail.haskell.org/pipermail/haskell-prime/2016-May/004115.html

* Simon Peyton Jones says type inference for GADTs has, in fact,
stabilized. Albeit, not entirely satisfactory because it's "a bit
operational"

https://mail.haskell.org/pipermail/haskell-prime/2016-May/004119.html

* Henrik Nilsson thinks GADTs are one of the most important extensions
to Haskell (fwiw, wren agrees). Also worries about being able to
specify type inference declaratively.

https://mail.haskell.org/pipermail/haskell-prime/2016-May/004120.html

* Dominique Devriese suggests including MonoLocalBinds, finds "let
should not be generalized" convincing

https://mail.haskell.org/pipermail/haskell-prime/2016-May/004117.html

* wren finds "let should not be generalized" unconvincing

https://mail.haskell.org/pipermail/haskell-prime/2016-May/004145.html

-- 
Live well,
~wren


Re: Are there GHC extensions we'd like to incorporate wholesale?

2016-05-07 Thread wren romano
On Wed, May 4, 2016 at 3:23 AM, Herbert Valerio Riedel
<hvrie...@gmail.com> wrote:
> On 2016-05-04 at 06:48:38 +0200, wren romano wrote:
>> Speaking of which, are things like the AMP and FTP under our purview
>> or are they under the CLC?
>
> I tried to clarify in the call-for-nomination and the formation
> announcement that the library part of the Haskell Report shall be
> formally under the CL(i)C's purview

I get that. My question was more because of how various things in the
Prelude are hard-coded into the rest of the report— in particular, the
core type classes feel like they span the boundary: the Monad class is
needed for the IO type for `main`, the fromInteger method of Num and
fromRational method of Fractional are needed for handling of numeric
literals, etc.
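For instance, a sketch of what the report's literal-handling rules amount to (the `five`/`half` names are invented for illustration):

```haskell
import Data.Ratio ((%))

-- An integer literal stands for an application of fromInteger, and
-- a fractional literal for an application of fromRational; this is
-- how Num and Fractional are wired into the language itself.
five :: Double
five = fromInteger 5         -- the elaboration of the literal  5

half :: Double
half = fromRational (1 % 2)  -- the elaboration of the literal  0.5
```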

-- 
Live well,
~wren


Re: Are there GHC extensions we'd like to incorporate wholesale?

2016-05-07 Thread wren romano
On Wed, May 4, 2016 at 2:51 AM, Dominique Devriese
 wrote:
> As an outsider, I would like to suggest thinking about MonoLocalBinds.  GHC
> has a rather convincing story (at least to me) that "(local) let should not
> be generalised" (since it becomes problematic in combination with several
> other language extensions) and the journal version of the OutsideIn(X) paper
> has empirical data that indicates it is not a big problem to remove.  If
> there is a concern about backwards compatibility, perhaps the committee
> could deprecate local let generalisation in Haskell2020 and remove it in a
> subsequent iteration of the report?


FWIW, I'm against MonoLocalBinds. I understand the rationale given in
the paper, but I disagree with it. In my experience the medicine is
worse than the disease.

It used to be that where-clauses were a nice clean way of organizing
code, especially for things like the worker/wrapper transform; but
with MonoLocalBinds all the benefits of where-clauses are eliminated.
For every local binding I'm forced to provide a type signature
—because part of the whole *point* of factoring things out is that
you're going to use them repeatedly, which for GADTs virtually
guarantees the uses will be at different index types and therefore
will require universal quantification— and these requisite type
signatures almost entirely duplicate information provided by the
signature for the primary/top-level binding. Indeed, in almost every
situation I end up needing to manually provide type signatures which
are identical to what let-generalization would have inferred. This
repetition is not merely annoying, it actively harms legibility and
maintainability of code.
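A minimal sketch of the complaint (the `T`/`describe` names are invented for illustration): under MonoLocalBinds, omitting the signature on `render` typically makes this fail to type check, even though the signature merely restates what let-generalization would have inferred.

```haskell
{-# LANGUAGE GADTs #-}

data T a where
  TI :: Int  -> T Int
  TB :: Bool -> T Bool

-- `render` is used at two different index types (T Int and T Bool),
-- so it needs to be polymorphic; MonoLocalBinds forces us to say so
-- explicitly in the where-clause.
describe :: T Int -> T Bool -> String
describe x y = render x ++ "/" ++ render y
  where
    render :: T a -> String   -- required under MonoLocalBinds
    render (TI _) = "int"
    render (TB _) = "bool"
```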

-- 
Live well,
~wren


Re: Are there GHC extensions we'd like to incorporate wholesale?

2016-05-03 Thread wren romano
On Tue, May 3, 2016 at 2:12 AM, Andres Loeh  wrote:
> Hi.
>
> Just to add a few general points. There are different dimensions to
> evaluate GHC extensions for inclusion in the standard, and just making
> lists does not really reflect that. The two most important ones, I
> think, are:
>
> 1. Whether we think they're actually a good idea or not.
>
> 2. Whether we think they're feasible to specify in a sensible way.
>
> There are variations of these points (extensions that are perhaps
> possible to specify, but ugly in their current form; extensions that
> have subtle interactions with others; ...)

I agree that these two points are what we should really be focusing on
(as well as things like ease of implementation), not just popularity.

For example, much as I love GADTs and would be all for them being
added in some future language report, I do not feel they should be
added this time around. (Though I emphatically and wholeheartedly
support adding GADTSyntax.) The primary reason being that while the
semantics of the data types themselves is easy enough to define,
there's no really sensible way of specifying how type inference should
work for them. GHC has gone back and forth with a bunch of different
inference methods over the years, and I don't think that's really
stabilized yet; so basically we'd be stuck with (a) not giving any
useful specification of what users can expect from inference / what
implementors must provide to be considered "standards compliant", or
(b) requiring some inference strategy that will stifle development
rather than help it.
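For reference, GADTSyntax is just an alternate surface syntax for ordinary datatypes, which is why it is uncontroversial; a sketch (with an invented `Shape` type):

```haskell
{-# LANGUAGE GADTSyntax #-}

-- The same datatype in traditional ADT syntax and in GADT-style
-- syntax; the former can be read as sugar for the latter.
data Shape = Circle Double | Square Double

data Shape' where
  Circle' :: Double -> Shape'
  Square' :: Double -> Shape'

area :: Shape' -> Double
area (Circle' r) = pi * r * r
area (Square' s) = s * s
```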

Similarly, while TypeFamilies are fun to play with, I don't think
they're ready for primetime either. While in theory they are
equivalent to fundeps, in practice they're still playing catchup.
Given the choice, I'd much rather see fundeps be added to the spec,
where we make sure to phrase them in such a way that a viable
implementation could consider fundep syntax to be syntactic sugar for
some underlying story with TypeFamilies. —As for whether I'd actually
be given that choice in the first place, I know there have been issues
in the past about adding fundeps to the report. So far as I know, the
main concern is that implementing them correctly is quite intricate
(as also reflected in the intricacies of getting things like
injectivity for TypeFamilies working). But, if we think it'd be a good
idea to aim for getting some variant of type families into the
standard in a future version, then the complexity of fundeps is
already required (if we're to have a complete story for where
TypeFamilies is currently headed). So adding fundeps now does help
towards the eventual goal of adding type families; even if one day we
think of fundeps as a funny syntactic sugar, like how some of us
already consider the ADT syntax to be a funny syntactic sugar for the
more straightforward GADTSyntax.
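A sketch of that correspondence in current GHC (the class names are invented): the fundep formulation and the associated-type-family formulation both express the same "c determines e" dependency.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             TypeFamilies, FlexibleInstances #-}

-- Fundep version: the class parameter e is determined by c.
class Container c e | c -> e where
  firstElem :: c -> e

instance Container [a] a where
  firstElem (x:_) = x

-- Type-family version: the same dependency as an associated type.
class Container' c where
  type Elem c
  firstElem' :: c -> Elem c

instance Container' [a] where
  type Elem [a] = a
  firstElem' (x:_) = x
```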



> All this being said, I still have a personal list:
>
> BangPatterns
> ConstrainedClassMethods
> ConstraintKinds (?)
> Derive* (?)
> EmptyCase
> ExistentialQuantification
> ExplicitForAll
> ExplicitNamespaces
> ExtendedDefaultRules (?)
> FlexibleContexts
> FlexibleInstances
> GADTSyntax
> InstanceSigs
> KindSignatures
> NullaryTypeClasses
> Overloaded* (?)
> PartialTypeSignatures (?)
> RankNTypes
> ScopedTypeVariables
> StandaloneDeriving (?)
> TypeSynonymInstances
> TypeOperators (?)
>
> I probably forgot a few. For the ones listed with (?), I am aware of
> some problems, but I'd still be very happy to at least have some
> discussions about them and make some progress in the direction of
> future standardization, as I indicated above.


Many of these are on my personal hit list as well. And most of the
ones that aren't still seem fine to add; the exceptions being:

TypeSynonymInstances — I'm not a fan of this extension since it means
big warnings when teaching people Haskell (just getting them to
recognize that type-synonyms are mere shorthands and not generative
definitions is hard enough; any code using this extension requires
them to recognize that sooner). Though if people really want it...
::shrug::

ExtendedDefaultRules — may be okay, but I haven't used/needed it so
have never thought about it.

FlexibleInstances — I like this and want something very much like it,
but it's somewhat tricksy in practice so may need tweaking. In
particular, I've typically found that non-linear use of type variables
in the instance head is not in fact what people usually want/mean, and
so when such instances are given it leads to usability issues re
inference. Many times what is actually desired is to have a delayed
unification constraint, e.g. @instance (i ~ j) => Monad (IContT i j m)
where...@, where we first match the head (with only linear type
variables) and then after we've committed to selecting this instance
only then do we try to resolve the constraint. Even though the syntax
here is suggestive, I think we should be able to get away with adding
delayed unification constraints without committing to full-blown type
equality.

Re: Chairship / responsibility

2016-04-29 Thread wren romano
> Hi Prime,
>
> Is there a chair of this committee? Herbert has been acting as such (thank 
> you!) but doesn't list himself as the chair in the initial announcement. I am 
> **in no way** trying to change any status quo and am **not** interested in 
> being chair at the moment, but I just wanted to clarify.
>
> The specific reason I ask is that Takenobu Tani recently asked about `pseq`. 
> I have no intelligent response to offer, but would want to make sure that 
> someone does offer a response. If there is a chair, that person is de facto 
> responsible that we, as a committee, communicate well, both internally and 
> externally.


I don't know that we have one (officially), but I agree that
getting/agreeing-on one should be done soon.

-- 
Live well,
~wren


Re: Infrastructure & Communication

2016-04-29 Thread wren romano
On Fri, Apr 29, 2016 at 9:17 AM, Mario Blažević  wrote:
> There are two or three distinct components we need to keep track of: the
> draft standard, discussions, and potentially RFCs.
>
> Discussions can be hosted on this mailing list, on Trac, or as Git
> issues. Each of them would serve fine, but we should choose exactly one and
> stick to it. The mailing list looks pretty good in isolation, but the best
> choice depends on whether we want to have RFCs or not.
>
> If we support Requests for Comments, we'll need to also support their
> public submissions and Git pull requests or something to the same effect. In
> that case, at least the inevitable comments on RFCs would best be placed
> close to the RFCs themselves - if the RFCs end up on GitHub the discussions
> of them should be kept as GitHub issues.

I agree with all of this, and think this distinction should be kept in
mind when organizing things in whatever tools we choose.

For general discussions I think this mailing list is best. I'm fine
with keeping IRC as a side channel for hashing things out more
interactively, but it's all too easy to miss things there, so I think
it's best kept as a side channel rather than a main one.

I like (something like) GitHub issues for tracking the exact content
of proposed changes and their (direct) commentary. As far as the
particular tool goes, I'm mostly agnostic, but lean slightly towards
GitHub over Trac. I've never used Phabricator so can't speak to it
(though I'm slightly against it, as it'd be another tool to learn).

As far as wiki stuff goes, to be honest I'm kinda against it. I can
see how it could be helpful as a sort of staging ground prior to
actual RFCs, or as an edited synopsis of email discussion; but in my
experience the wiki proposals for Haskell changes tend to get very
crufty and hard to follow after a few changes have been made. I think
I'd rather see specific versioning on proposals, so it can be clear
when/which parts of the proposal are retracted, amended, etc. This may
very well be a reason to prefer github, since such development can
happen in branches where we can see the iteration of changes prior to
merging a specific one into the master branch.

/me goes off to read about how Fsharp, Rust, and Go do things

-- 
Live well,
~wren


Re: Haskell Prime 2020 committee

2016-04-28 Thread wren romano
On Thu, Apr 28, 2016 at 2:24 PM, Antonio Nikishaev  wrote:
>> On 28 Apr 2016, at 21:45, Richard Eisenberg  wrote:
>> Seems reasonable. I have started the page at 
>> https://wiki.haskell.org/Language/HaskellPrime
>
> Should it be under prime.haskell.org instead?
>
> (I don't seem to be able to register there though, “Username doesn't match 
> local naming policy.”)

The idea seems reasonable, but I also can't set up an account on the wiki...


-- 
Live well,
~wren


Re: Language Change Management (was: [Haskell-cafe] Monad of no `return` Proposal (MRP): Moving `return` out of `Monad`)

2015-10-05 Thread wren romano
On Mon, Oct 5, 2015 at 5:23 PM, Adam Foltzer  wrote:
>>  Also I'm not sure if there would be less complaints if
>> AMP/FTP/MFP/MRP/etc as part of a new Haskell Report would be switched on all
>> at once in e.g. `base-5.0`, breaking almost *every* single package out there
>> at once.
>
> I doubt the number of complaints-per-change would be fewer, but I'm strongly
> in favor of moving away from what feels like a treadmill that doesn't value
> the time of developers and that doesn't account for the
> more-than-sum-of-parts cost of the "constant flux".

Broadly speaking, I'm a "fix it now rather than later" sort of person
in Haskell because I've seen how long things can linger before finally
getting fixed (even when everyone agrees on what the fix should be and
agrees that it should be done). However, as I mentioned in the
originating thread, I think that —at this point— when it comes to
AMP/FTP/MFP/MRP/etc we should really aim for the haskell' committee to
work out a comprehensive solution (as soon as possible), and then
enact all the changes at once when switching to
Haskell201X/base-5.0/whatevs. I understand the motivations for wanting
things to be field-tested before making it into the report, but I
don't think having a series of rapid incremental changes is the
correct approach here. Because we're dealing with the Prelude and the
core classes, the amount of breakage (and CPP used to paper over it)
here is much higher than our usual treadmill of changes; so we should
take that into account when planning how to roll the changes out.

-- 
Live well,
~wren


Self-nomination

2015-09-23 Thread wren romano
Hello,

I'd like to nominate myself for the new Haskell Prime committee.

Experience-wise:

I've been active on the libraries list for something like a decade. A
couple years ago I wrote my "burning bridges" email[1], observing that
we've long known certain problems with official/standard Haskell, and
equally long known the solutions, but had been unwilling to actually
push through the communally agreed upon solutions. That email in turn
spawned the AMP[2] and the FTP[3]. The AMP has been widely praised as
an improvement. The FTP has been more controversial (since it was less
thoroughly thought through), but is still I think widely viewed in a
positive light. One thing I'd like to see/work on is fully integrating
these proposals into the standard, so that we can knock off the rough
edges (esp. re the FTP, as we've discovered numerous oversights in how
that's been implemented so far) and make things easier and more
polished for the other Haskell compilers like JHC and UHC to work
with.

Outside of Haskell per se, my work focuses on formalizing language
semantics. Some examples include my master's work on typed unification
in the weighted-logic programming language Dyna[4], my thesis work on
chiastic lambda-calculi[5] including mechanized proofs in Coq, and my
current work on the probabilistic programming language Hakaru[6]. I
also helped Lindsey and Ryan work out the semantic underpinnings of
LVars[7], though this isn't documented anywhere.


[1] 
[2] 
[3] 
[4] I just found out Wes made a github repo
 for Dyna2 (the reimplementation of Dyna1
we worked on together), though I haven't checked to see whether my
typed unification stuff is in there yet.
[5] 
[6] 
[7] 

-- 
Live well,
~wren