Re: TTG: Handling Source Locations

2019-02-09 Thread Vladislav Zavialov
I wholly share this concern, which is why I commented on the Phab diff:

> Does this rely on the caller to call dL on the pattern? Very fragile, let's 
> not do that.

In addition, I'm worried about illegal states where we end up with
multiple nested levels of `NewPat`, and calling `dL` once is not
sufficient.
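The failure mode is easy to reproduce in miniature. Below is a self-contained sketch of a dL-style interface; all names (`Pat`, `NewPat`, `dL`, `L`) are simplified stand-ins, not GHC's real definitions. A match that forgets dL still type-checks but silently misses located nodes.

```haskell
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

-- Simplified stand-ins: SrcSpan is optional here, and NewPat plays
-- the role of the location wrapper.
type SrcSpan = Maybe (Int, Int)

data Pat = VarPat String | NewPat SrcSpan Pat

-- dL strips a location wrapper if present; on a node with no location
-- it does not fail -- it just invents noSrcSpan (here: Nothing).
dL :: Pat -> (SrcSpan, Pat)
dL (NewPat l p) = (l, p)
dL p            = (Nothing, p)

pattern L :: SrcSpan -> Pat -> Pat
pattern L l p <- (dL -> (l, p))

-- Forgetting dL still type-checks, but a located VarPat slips through
-- to the catch-all case:
fooWrong :: Pat -> String
fooWrong (VarPat v) = v
fooWrong _          = "no match"

-- The intended style, matching through dL:
fooRight :: Pat -> String
fooRight (L _ (VarPat v)) = v
fooRight _                = "no match"
```

For example, `fooWrong (NewPat (Just (1,1)) (VarPat "x"))` falls into the catch-all, while `fooRight` on the same input recovers `"x"`.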

As to the better solution, I think we should just go with Solution B
from the Wiki page. Yes, it's somewhat more boilerplate, but it
guarantees that locations are in the right places for all nodes. The
main argument against it was that we'd have to define `type instance
XThing (GhcPass p) = SrcSpan` for many a `Thing`, but I don't see it
as a downside at all. We should do so anyway, to get rid of parsing
API annotations and put them in the AST proper.

All the best,
Vladislav

On Sat, Feb 9, 2019 at 7:19 PM Richard Eisenberg  wrote:
>
> Hi devs,
>
> I just came across [TTG: Handling Source Locations], as I was poking around 
> in RdrHsSyn and found wondrous things like (dL->L wiz waz) all over the place.
>
> General outline: 
> https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/HandlingSourceLocations
> Phab diff: https://phabricator.haskell.org/D5036
> Trac ticket: https://ghc.haskell.org/trac/ghc/ticket/15495
> Commit: 
> https://gitlab.haskell.org/ghc/ghc/commit/509d5be69c7507ba5d0a5f39ffd1613a59e73eea
>
> I see why this change is wanted and how the new version works.
>
> It seems to me, though, that this move makes us *less typed*. That is, it 
> would be very easy (and disastrous) to forget to match on a location node. 
> For example, I can now do this:
>
> > foo :: LPat p -> ...
> > foo (VarPat ...) = ...
>
> Note that I have declared that foo takes a located pat, but then I forgot to 
> extract the location with dL. This would type-check, but it would fail. 
> Previously, the type checker would ensure that I didn't forget to match on 
> the L constructor. This error would get caught after some poking about, 
> because foo just wouldn't work.
>
> However, worse, we might forget to *add* a location when downstream functions 
> expect one. This would be harder to detect, for two reasons:
> 1. The problem is caught at deconstruction, and figuring out where an object 
> was constructed can be quite hard.
> 2. The problem might silently cause trouble, because dL won't actually fail 
> on a node missing a location -- it just gives noSrcSpan. So the problem would 
> manifest as a subtle degradation in the quality of an error message, perhaps 
> not caught until several patches (or years!) later.
>
> So I'm uncomfortable with this direction of travel.
>
> Has this aspect of this design been brought up before? I have to say I don't 
> have a great solution to suggest. Perhaps the best I can think of is to make 
> Located a type family. It would branch on the type index to HsSyn types, 
introducing a Located node for GhcPass but not for other types. This isn't 
> really all that extensible (I think) and it gives special status to GHC's 
> usage of the AST. But it seems to solve the immediate problems without the 
> downside above.
>
> Sorry for reopening something that has already been debated, but (unless I'm 
> missing something) the current state of affairs seems like a potential 
> wellspring of subtle bugs.
>
> Thanks,
> Richard
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: TTG: Handling Source Locations

2019-02-12 Thread Vladislav Zavialov
> One way to think of it is this: we can now put SrcSpans where they make 
> sense, rather than everywhere.

I claim an SrcSpan makes sense everywhere, so this is not a useful
distinction. Think of it as code provenance: an AST node always
comes from somewhere, be it a user-written .hs file, a GHCi command, or
compiler-generated code (via TH or deriving). We should never omit
this information from a node.

And when we are writing code that consumes an AST, it always makes
sense to ask what the provenance of a node is, for example to use it
in an error message.
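As a sketch, "provenance everywhere" could be a total data type along these lines (the constructor names are illustrative; GHC's real SrcSpan distinguishes RealSrcSpan from UnhelpfulSpan):

```haskell
-- Every node carries provenance; there is no "missing" case.
data Provenance
  = FromFile FilePath Int Int   -- user-written code: file, line, column
  | FromGhci Int                -- entered at a GHCi prompt (statement number)
  | CompilerGenerated String    -- TH, deriving, etc., with a description

-- A consumer can always render the provenance for an error message:
describe :: Provenance -> String
describe (FromFile f l c)      = f ++ ":" ++ show l ++ ":" ++ show c
describe (FromGhci n)          = "<interactive>:" ++ show n
describe (CompilerGenerated s) = "<generated by " ++ s ++ ">"
```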

> this lets us add more than one; that's redundant but not harmful

It goes against the philosophy of making illegal states
unrepresentable. Now all code must be careful not to end up in an
illegal state of nested SrcSpans, without any help from the
typechecker.

At the same time, the code that pattern matches on an AST must be
prepared to handle this case anyway (or else we risk a crash), which
it currently does with stripSrcSpanPat in the implementation of dL.

Having to remember to apply dL when matching on the AST is yet more
trivia to learn and remember: there is not even a warning if one
forgets to do so, no appropriate place to explain it to new
contributors (reading another Note just to start doing anything at all
with the AST is unnecessary friction), and at best a test failure in
case of a mistake.

My concrete proposal: let's just put SrcSpan in the extension fields
of each node. In other words, take these lines

type instance XVarPat  (GhcPass _) = NoExt
type instance XLazyPat (GhcPass _) = NoExt
type instance XAsPat   (GhcPass _) = NoExt
type instance XParPat  (GhcPass _) = NoExt
type instance XBangPat (GhcPass _) = NoExt
...

and replace them with

type instance XVarPat  (GhcPass _) = SrcSpan
type instance XLazyPat (GhcPass _) = SrcSpan
type instance XAsPat   (GhcPass _) = SrcSpan
type instance XParPat  (GhcPass _) = SrcSpan
type instance XBangPat (GhcPass _) = SrcSpan
...

And don't bother with the HasSrcSpan class, don't define
composeSrcSpan and decomposeSrcSpan. Very straightforward and
beneficial for both producers and consumers of an AST.
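In miniature, the proposal looks as follows (a toy pattern type with made-up names, not GHC's definitions): the SrcSpan lives in the extension field, so a producer cannot construct a node without a location, and a consumer extracts it by ordinary pattern matching.

```haskell
{-# LANGUAGE DataKinds, TypeFamilies #-}
import Data.Kind (Type)

type SrcSpan = (Int, Int)   -- stand-in for GHC's SrcSpan

data Pass = Parsed | Renamed

data Pat (p :: Pass)
  = VarPat  (XVarPat p) String
  | WildPat (XWildPat p)

type family XVarPat  (p :: Pass) :: Type
type family XWildPat (p :: Pass) :: Type

-- The extension field of every constructor carries the location:
type instance XVarPat  p = SrcSpan
type instance XWildPat p = SrcSpan

-- No dL, no HasSrcSpan: the location is simply there.
patLoc :: Pat p -> SrcSpan
patLoc (VarPat l _) = l
patLoc (WildPat l)  = l
```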

All the best,
Vladislav


Put 'haddock' in the 'ghc' repo

2019-02-16 Thread Vladislav Zavialov
Hello devs,

There appears to be no good workflow for contributing patches that
change both GHC and Haddock.

For contributors who have push access to both repositories, it is at
least tolerable:

1. create a Haddock branch with the required changes
2. create a GHC branch with the required changes

Then wait for the GHC change to get merged to `master`, and

3a. fast-forward the Haddock change to the `ghc-head` branch
3b. in case a fast-forward is impossible, cherry-pick the commit to
`ghc-head` and push another commit to GHC `master` to update the
Haddock submodule

Roundabout, but possible.

For contributors who do not have push access to both repositories,
each step is much harder, as working with forks implies messing with
.gitmodules, which arguably should stay constant.

To avoid all this friction, I propose the following principle:

* all strongly connected components (SCCs) of the dependency graph must
go in the same repo.

For example, since GHC depends on Haddock to build documentation, and
Haddock depends on GHC, they must go to the same repo. This way, a
single commit can update both of them in sync.

All the best,
Vladislav


Re: GHC HEAD documentation once again available

2019-03-30 Thread Vladislav Zavialov
Hi Ben,

The generated Libraries page contains the following line:

   For documentation on the GHC API, see ghc-8.9.20190330/index.html

The link is dead and should be ghc-8.9/index.html instead.

All the best,
- Vlad

> On 31 Mar 2019, at 03:56, Ben Gamari  wrote:
> 
> TL;DR. A snapshot of GHC's documentation from the master branch can
>   always be found at [2].
> 
> 
> Hi everyone,
> 
> Quite a while ago I made it a habit of periodically pushing
> documentation snapshots from GHC's master branch to
> downloads.haskell.org [1]. Unfortunately, despite some attempts at
> automating this process, this frequently grew out-of-date.
> 
> I am happy to report that documentation snapshots are now generated
> as a product of GHC's CI process and made available here [2]. The old
> downloads.haskell.org URL redirects to [2] and consequently should now
> always be up-to-date.
> 
> Let me know if you notice anything amiss.
> 
> Cheers,
> 
> - Ben
> 
> 
> [1] https://downloads.haskell.org/ghc/master/
> [2] https://ghc.gitlab.haskell.org/ghc/doc/



Parser performance: 10% regression in 8.8

2019-05-08 Thread Vladislav Zavialov
Hello ghc-devs,

This February I made some changes to the parser that require higher rank types 
support in ‘happy’. Unfortunately, as I discovered, happy’s --coerce option is 
severely broken in the presence of higher rank types, so I had to disable it. 
My benchmarks have shown a 10% slowdown from disabling --coerce 
(https://gist.github.com/int-index/38af0c5dd801088dc1de59eca4e55df4).

Alongside my changes I submitted a pull request to happy which fixes the issue 
(https://github.com/simonmar/happy/pull/134), in the hope that it would get 
merged, released, and I could re-enable --coerce in GHC’s ‘happy’ configuration.

Unfortunately, my patch has been ignored to this day (for 3 months now), and 
the performance regression reached 8.8-alpha. We need to act swiftly if we want 
to avoid a performance regression in the actual release. Here’s what needs to 
be done:

1. Merge https://github.com/simonmar/happy/pull/134
2. Release a new ‘happy’
3. (Optional) Specify in GHC’s build system that it builds only with the latest 
'happy' release
4. Restore the --coerce option in GHC’s build system ‘happy’ configuration
5. Backport it to the ghc-8.8 branch

I have no access to do 1 & 2; I believe Simon Marlow does. I’d appreciate it if 
someone took care of 3: currently the build system does not install ‘happy’ and 
assumes a system-wide installation without checking its version. This means 
that users of all but the newly released version will encounter obscure error 
messages. We need a version check. Then I will do 4, as planned, and create a 
merge request for 5.
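The check itself is small. Here is a hedged sketch in Haskell (in the spirit of Hadrian, GHC's Haskell build system, but not its actual code): the probing of `happy --version` output is left out, and version strings are compared component-wise.

```haskell
-- Turn "1.19.10" into [1,19,10] for component-wise comparison.
parseVersion :: String -> [Int]
parseVersion = map read . words . map dotToSpace
  where dotToSpace c = if c == '.' then ' ' else c

-- Component-wise: [1,19,9] < [1,19,10] < [1,20]
atLeast :: String -> String -> Bool
atLeast actual required = parseVersion actual >= parseVersion required

-- The version gate step 3 asks for; 1.19.10 is the release that
-- restores a working --coerce.
checkHappy :: String -> Either String ()
checkHappy found
  | found `atLeast` "1.19.10" = Right ()
  | otherwise =
      Left ("happy " ++ found ++ " is too old; 1.19.10 or later is required")
```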

All the best,
- Vladislav


Re: [hadrian] happy 1.19.10

2019-05-14 Thread Vladislav Zavialov
Hi Shayne,

I don’t use ‘stack’ to build GHC and CI doesn’t check it, so I think I missed 
this one. Thanks for bringing this up. Have you already checked locally that 
‘lts-13.21’ fixes the issue? If so, perhaps submit an MR with your fix. Would 
it be a good idea to add a CI job for stack to avoid breakage in the future?

- Vlad

> On 15 May 2019, at 04:36, Shayne Fletcher via ghc-devs  
> wrote:
> 
> Hi Vlad,
> 
> Are there imminent plans to update hadrian/stack.yaml with something like,
> ```
> # Specifies the GHC version and set of packages available (e.g., lts-3.5, 
> nightly-2015-09-21, ghc-7.10.2)
> resolver: lts-13.21
> ```
> I think this is necessary to get the recent happy upgrade? 
> 
> -- 
> Shayne Fletcher
> Language Engineer
> c: +1 917 699 7763
> e: shayne.fletc...@daml.com
> Digital Asset Holdings, LLC
> 4 World Trade Center, 150 Greenwich Street, 47th Floor
> New York, NY 10007, USA
> digitalasset.com
> 
> 
> This message, and any attachments, is for the intended recipient(s) only, may 
> contain information that is privileged, confidential and/or proprietary and 
> subject to important terms and conditions available at 
> http://www.digitalasset.com/emaildisclaimer.html. If you are not the intended 
> recipient, please delete this 
> message.



Re: Parser performance: 10% regression in 8.8

2019-05-14 Thread Vladislav Zavialov
Steps 3–4 are done: 
https://gitlab.haskell.org/ghc/ghc/commit/684dc290563769d456b6f1c772673d64307ab072
Step 5, as it turns out, is not needed: I was mistaken and GHC 8.8 was not 
affected. I got confused about the releases: it’s only 8.10 that would be 
affected; sorry about this. However, it’s good that we merged the fix before 
the ghc-8.10 branch is cut, which should happen mid-June according to 
https://www.haskell.org/ghc/blog/20190405-ghc-8.8-status.html

Thanks everyone for responding to this; I’ve got help with CI images and with 
updating the build configuration.

All the best,
- Vladislav


> On 9 May 2019, at 10:35, Simon Marlow  wrote:
> 
> Thanks for bringing this up.  I've merged the PR and uploaded Happy 1.19.10 
> to Hackage.  Can someone else look at steps 3-5?
> 
> Cheers
> Simon
> 
> On Wed, 8 May 2019 at 09:51, Vladislav Zavialov  wrote:
> Hello ghc-devs,
> 
> This February I did some changes to the parser that require higher rank types 
> support in ‘happy’. Unfortunately, as I discovered, happy’s --coerce option 
> is severely broken in the presence of higher rank types, so I had to disable 
> it. My benchmarks have shown a 10% slowdown from disabling --coerce 
> (https://gist.github.com/int-index/38af0c5dd801088dc1de59eca4e55df4).
> 
> Alongside my changes I submitted a pull request to happy which fixes the 
> issue (https://github.com/simonmar/happy/pull/134), in the hope that it would 
> get merged, released, and I could re-enable --coerce in GHC ‘happy' 
> configuration.
> 
> Unfortunately, my patch has been ignored to this day (for 3 months now), and 
> the performance regression reached 8.8-alpha. We need to act swiftly if we 
> want to avoid a performance regression in the actual release. Here’s what 
> needs to be done:
> 
> 1. Merge https://github.com/simonmar/happy/pull/134
> 2. Release a new ‘happy’
> 3. (Optional) Specify in GHC’s build system that it builds only with the 
> latest 'happy' release
> 4. Restore the --coerce option in GHC’s build system ‘happy’ configuration
> 5. Backport it to the ghc-8.8 branch
> 
> I have no access to do 1 & 2, I believe Simon Marlow does. I’d appreciate if 
> someone took care of 3, currently the build system does not install ‘happy’ 
> and assumes a system-wide installation without checking its version. This 
> means that users of all but the newly released version will encounter obscure 
> error messages. We need a version check. Then I will do 4, as planned, and 
> create a merge request for 5.
> 
> All the best,
> - Vladislav



Re: New implementation for `ImpredicativeTypes`

2019-09-06 Thread Vladislav Zavialov
Iavor,

Alex’s example can be well-typed if we allow first-class existentials:

  [1, 'a', "b"] :: [exists a. Show a => a]

This has nothing to do with the definition of lists. I believe the confusion 
was between existential types and impredicative types, as Simon has pointed out.
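Since first-class existentials are not in GHC today, the closest runnable rendering of that type uses the usual wrapper encoding (the same `Showable` Simon sketches in the quoted message below):

```haskell
{-# LANGUAGE GADTs #-}

-- Runnable stand-in for [exists a. Show a => a]: the existential is
-- introduced by an explicit wrapper constructor.
data Showable where
  S :: Show a => a -> Showable

instance Show Showable where
  show (S x) = show x

heterogeneous :: [Showable]
heterogeneous = [S (1 :: Int), S 'a', S "b"]

rendered :: [String]
rendered = map show heterogeneous
```

With first-class existentials, the `S` wrappers would be inferred rather than written by hand.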

- Vlad

> On 6 Sep 2019, at 20:56, Iavor Diatchki  wrote:
> 
> Hello Alex,
> 
> the issue with your example is not the mapping of `show` but the list
> `[1, 'a', "b"]`.  It is not well typed simply because of how lists are
> defined.   Remember that `[1, 'a', "b"]` is not really special---it is
> just syntactic sugar for `1 : 'a' : "b" : []` and the type of `(:)`
> requires the elements to have the same type.
> 
> Of course, in principle, one could define a different list type that
> allowed values of arbitrary types to be stored in it (e.g., the
> example list would be just of type `List`).
> The issue is that you can't really use the elements of such a list as
> you wouldn't know what type they have.
> 
> Yet another option is to define a list type where the "cons" operation
> remembers the types of the elements in the type of the constructed
> list---at this point the lists become more like tuples (e.g., the
> example would be of type `List [Int,Char,String]`).   This is
> possible, but then the `map` function would have an interesting
> type...
> 
> I'd be happy to answer more questions but I don't want to side-track
> the thread as all this is quite orthogonal to impredicative types.
> 
> -Iavor
> 
> 
> 
> 
> 
> 
> 
> 
> 
> On Fri, Sep 6, 2019 at 7:21 AM Alex Rozenshteyn  wrote:
>> 
>> Hi Simon,
>> 
>> You're exactly right, of course. My example is confusing, so let me see if I 
>> can clarify.
>> 
>> What I want in the ideal is map show [1, 'a', "b"]. That is, minimal 
>> syntactic overhead to mapping a function over multiple values of distinct 
>> types that results in a homogeneous list. As the reddit thread points out, 
>> there are workarounds involving TH or wrapping each element in a constructor 
>> or using bespoke operators, but when it comes down to it, none of them 
>> actually allows me to say what I mean; the TH one is closest, but I reach 
>> for TH only in times of desperation.
>> 
>> I had thought that one of the things preventing this was lack of 
>> impredicative instantiation, but now I'm not sure. Suppose Haskell did have 
>> existentials; would map show @(exists a. Show a => a) [1, 'a', "b"] work in 
>> current Haskell and/or in quick-look?
>> 
>> Tangentially, do you have a reference for what difficulties arise in adding 
>> existentials to Haskell? I have a feeling that it would make working with 
>> GADTs more ergonomic.
>> 
>> On Fri, Sep 6, 2019 at 12:33 AM Simon Peyton Jones  
>> wrote:
>>> 
>>> I’m confused.   Char does not have the type (forall a. Show a => a), so our 
>>> example is ill-typed in System F, never mind about type inference.  
>>> Perhaps there’s a typo?   I think you may have meant
>>> 
>>>   exists a. Show a => a
>>> 
>>> which doesn’t exist in Haskell.  You can write existentials with a data type
>>> 
>>> 
>>> 
>>> data Showable where
>>> 
>>>   S :: forall a. Show a => a -> Showable
>>> 
>>> 
>>> 
>>> Then
>>> 
>>>   map show [S 1, S 'a', S "b"]
>>> 
>>> works fine today (without our new stuff), provided you say
>>> 
>>> 
>>> 
>>>   instance Show Showable where
>>> 
>>> show (S x) = show x
>>> 
>>> 
>>> 
>>> Our new system can only type programs that can be written in System F.   
>>> (The tricky bit is inferring the impredicative instantiations.)
>>> 
>>> 
>>> 
>>> Simon
>>> 
>>> 
>>> 
>>> From: ghc-devs  On Behalf Of Alex Rozenshteyn
>>> Sent: 06 September 2019 03:31
>>> To: Alejandro Serrano Mena 
>>> Cc: GHC developers 
>>> Subject: Re: New implementation for `ImpredicativeTypes`
>>> 
>>> 
>>> 
>>> I didn't say anything when you were requesting use cases, so I have no 
>>> right to complain, but I'm still a little disappointed that this doesn't 
>>> fix my (admittedly very minor) issue: 
>>> https://www.reddit.com/r/haskell/comments/3am0qa/existentials_and_the_heterogenous_list_fallacy/csdwlp2/?context=8&depth=9
>>> 
>>> 
>>> 
>>> For those who don't want to click on the reddit link: I would like to be 
>>> able to write something like map show ([1, 'a', "b"] :: [forall a. Show a 
>>> => a]), and have it work.
>>> 
>>> 
>>> 
>>> On Wed, Sep 4, 2019 at 8:13 AM Alejandro Serrano Mena  
>>> wrote:
>>> 
>>> Hi all,
>>> 
>>> As I mentioned some time ago, we have been busy working on a new 
>>> implementation of `ImpredicativeTypes` for GHC. I am very thankful to 
>>> everybody who back then sent us examples of impredicativity which would be 
>>> nice to support, as far as we know this branch supports all of them! :)
>>> 
>>> 
>>> 
>>> If you want to try it, at 
>>> https://gitlab.haskell.org/trupill/ghc/commit/a3f95a0fe0f647702fd7225fa719a8062a4cc0a5/pipelines?ref=quick-look-build
>>>  you can find the result of the pipeline, 

Re: New implementation for `ImpredicativeTypes`

2019-09-06 Thread Vladislav Zavialov
No, I don’t expect the compiler to infer existential quantification, just like 
it doesn’t infer higher-rank universal quantification. However, I believe we 
could check terms against user-written types that contain existentials.

- Vlad

> On 6 Sep 2019, at 23:48, Iavor Diatchki  wrote:
> 
> Why would you infer this type as opposed to `[exists a. a]`?
> 
> On Fri, Sep 6, 2019 at 12:08 PM Vladislav Zavialov
>  wrote:
>> 
>> Iavor,
>> 
>> Alex’s example can be well-typed if we allow first-class existentials:
>> 
>>  [1, 'a', "b"] :: [exists a. Show a => a]
>> 
>> This has nothing to do with the definition of lists. I believe the confusion 
>> was between existential types and impredicative types, as Simon has pointed 
>> out.
>> 
>> - Vlad



Re: GHC 8.10.1 Release Plan

2019-09-19 Thread Vladislav Zavialov
Hi Ben,

Standalone kind signatures are implemented and are waiting for review and merge 
for GHC 8.10:

https://gitlab.haskell.org/ghc/ghc/merge_requests/1438
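For readers unfamiliar with the feature, a small usage example (per the accepted proposal; it requires a compiler with the patch, i.e. GHC 8.10 or later):

```haskell
{-# LANGUAGE StandaloneKindSignatures #-}
import Data.Kind (Type)

-- The kind is declared standalone, separately from the definition,
-- much like a type signature for a term:
type Pair :: Type -> Type
data Pair a = MkPair a a

-- Particularly useful for higher-kinded parameters:
type Wrap :: (Type -> Type) -> Type
data Wrap f = MkWrap (f Int)
```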

- Vlad

> On 18 Sep 2019, at 23:07, Ben Gamari  wrote:
> 
> tl;dr. If you have unmerged work that you would like to be in GHC 8.10 please
>   reply to this email and submit it for review in the next couple
>   of weeks.



Re: Handling source locations in HsSyn via TTG

2019-10-28 Thread Vladislav Zavialov
I care about this, and I maintain my viewpoint described in 
https://mail.haskell.org/pipermail/ghc-devs/2019-February/017080.html

I’m willing to implement this.

As to merge request !1970, it isn’t good to special-case GhcPass in a closed 
type family, making other tools second-class citizens. Let’s say I have 
`MyToolPass`, how would I write an instance of `WrapL` for it?

- Vlad

> On 28 Oct 2019, at 12:31, Simon Peyton Jones via ghc-devs 
>  wrote:
> 
> Friends
> 
> As you know
> 
>   • We are trying to use “Trees That Grow” (TTG) to move HsSyn towards a 
> situation in which GHC is merely a client of a generic HsSyn data type that 
> can be used by clients other than GHC.
>   • One sticking point has been the question of attaching source 
> locations.  We used to have a “ping-pong” style, in which every node is 
> decorated with a source location, but that’s a bit more awkward in TTG.
>   • This wiki page outlines some choices, while ticket #15495 has a lot 
> of discussion.
>   • HEAD embodies Solution A.  But it has the disadvantage that the type 
> system doesn’t enforce locations to be present at all.   That has undesirable 
> consequences (eg ticket #17330)
>   • The proposal is to move to Solution D on that page; you can see how 
> it plays out in MR !1970.
>   • (I think Solutions B and C are non-starters by comparison.)
> If you care, please check out the design and the MR.   We can change later, 
> of course, but doing so changes a lot of code, including client code, so we’d 
> prefer not to.
> 
> Let’s try to converge by the end of the week.
> 
> Thanks
> 
> Simon
> 
>  
> 
>  
> 



Re: Handling source locations in HsSyn via TTG

2019-10-28 Thread Vladislav Zavialov
> Note that the MR description is a little misleading and I should update it: 
> I'm using an open type family, really. 

Ah, that’s good to know. In this case, I’m in support.

- Vlad

> On 28 Oct 2019, at 13:13, Sebastian Graf  wrote:
> 
> Hi Vlad,
> 
> Note that the MR description is a little misleading and I should update it: 
> I'm using an open type family, really. See the section for solution D on the 
> wiki page that shows how to extend the approach to Haddock (which needs 
> SrcLocs, too).
> If I understand correctly, you're advocating solution B. If you can think of 
> any more Pros and Cons (comparing to solution D, in particular), feel free to 
> edit the wiki page.
> 
> Sebastian



Re: Handling source locations in HsSyn via TTG

2019-10-28 Thread Vladislav Zavialov
> Are you arguing for Solution D?  Or are you proposing some new solution E?  I 
> can't tell.

I suspect that I’m arguing for Solution B, but it’s hard to tell because it’s 
not described in enough detail in the Wiki.

> Easy
> 
>   type instance WrapL ToolPass t = ...
> 
> What am I missing?


This assumes that `WrapL` is an open type family. In this case, there’s no 
problem. The merge request description has the following definition of WrapL:

type family WrapL p (f :: * -> *) where
  WrapL (GhcPass p) f = Located (f (GhcPass p))
  WrapL p           f = f p
type LPat p = WrapL p Pat

That wouldn’t be extensible. However, if WrapL is open, then Solution D sounds 
good to me.
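Spelled out, the open-family version allows a hypothetical `MyToolPass` to choose its own wrapping. All types below are simplified stand-ins for Located, GhcPass and Pat, not GHC's real definitions:

```haskell
{-# LANGUAGE TypeFamilies #-}
import Data.Kind (Type)

data Located a  = L (Int, Int) a
data GhcPass p  = MkGhcPass
data Parsed     = Parsed
data MyToolPass = MyToolPass   -- a hypothetical third-party pass

data Pat p = VarPat String

-- Open, so downstream tools can add their own instances:
type family WrapL p (f :: Type -> Type) :: Type

-- GHC's instance wraps every node in Located:
type instance WrapL (GhcPass p) f = Located (f (GhcPass p))

-- An external tool is free to skip the wrapper entirely:
type instance WrapL MyToolPass f = f MyToolPass

type LPat p = WrapL p Pat

ghcPat :: LPat (GhcPass Parsed)
ghcPat = L (1, 1) (VarPat "x")

toolPat :: LPat MyToolPass
toolPat = VarPat "y"
```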

- Vlad 

> On 28 Oct 2019, at 13:20, Simon Peyton Jones  wrote:
> 
> Vlad
> 
> Are you arguing for Solution D?  Or are you proposing some new solution E?  I 
> can't tell.
> 
> 
> | As to merge request !1970, it isn’t good to special-case GhcPass in a
> | closed type family, making other tools second-class citizens. Let’s say I
> | have `MyToolPass`, how would I write an instance of `WrapL` for it?
> 
> Easy
> 
>   type instance WrapL ToolPass t = ...
> 
> What am I missing?
> 
> Simon
> 
> | -Original Message-
> | From: Vladislav Zavialov 
> | Sent: 28 October 2019 10:07
> | To: Simon Peyton Jones 
> | Cc: ghc-devs@haskell.org
> | Subject: Re: Handling source locations in HsSyn via TTG
> | 
> | I care about this, and I maintain my viewpoint described in
> | https://mail.haskell.org/pipermail/ghc-devs/2019-February/017080.html
> | 
> | I’m willing to implement this.
> | 
> | As to merge request !1970, it isn’t good to special-case GhcPass in a
> | closed type family, making other tools second-class citizens. Let’s say I
> | have `MyToolPass`, how would I write an instance of `WrapL` for it?
> | 
> | - Vlad
> | 
> | > On 28 Oct 2019, at 12:31, Simon Peyton Jones via ghc-devs  wrote:
> | >
> | > Friends
> | >
> | > As you know
> | >
> | >   • We are trying to use “Trees That Grow” (TTG) to move HsSyn towards
> | a situation in which GHC is merely a client of a generic HsSyn data type
> | that can be used by clients other than GHC.
> | >   • One sticking point has been the question of attaching source
> | > locations.  We used to have a “ping-pong” style, in which every node is
> | decorated with a source location, but that’s a bit more awkward in TTG.
> | >   • This wiki page outlines some choices, while ticket #15495 has a
> | lot of discussion.
> | >   • HEAD embodies Solution A.  But it has the disadvantage that the
> | type system doesn’t enforce locations to be present at all.   That has
> | undesirable consequences (eg ticket #17330)
> | >   • The proposal is to move to Solution D on that page; you can see
> | how it plays out in MR !1970.
> | >   • (I think Solutions B and C are non-starters by comparison.)
> | > If you care, please check out the design and the MR.   We can change
> | later, of course, but doing so changes a lot of code, including client
> | code, so we’d prefer not to.
> | >
> | > Let’s try to converge by the end of the week.
> | >
> | > Thanks
> | >
> | > Simon
> | >
> | >
> | >
> | >
> | >
> 



Re: re-engineering overloading and rebindable syntax

2019-12-05 Thread Vladislav Zavialov
I find this idea attractive. Could we desugar `do` in the same manner using 
this SrcSpan trick? Could we desugar infix operators `a + b` to `(+) a b`? We 
just need to store in the SrcSpan that the (+) was actually infix. 
Implementing as much syntactic sugar as possible this way would let us push 
complexity out of the type checker. 
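A toy version of the `if` case: desugar to an application of ifThenElse, marking every node the pass inserts as Generated so that a pretty-printer could later suppress the expansion. All names are invented for the sketch.

```haskell
-- The provenance piece: either a real source span, or "inserted by a
-- desugaring pass".
data Origin = FromSource (Int, Int) | Generated
  deriving (Eq, Show)

data Expr
  = Var Origin String
  | App Origin Expr Expr
  | If  Origin Expr Expr Expr
  deriving (Eq, Show)

-- `if c then t else e` becomes `ifThenElse c t e`; only the nodes we
-- insert are marked Generated, the sub-expressions keep their origins.
desugar :: Expr -> Expr
desugar (If _ c t e) =
  foldl (App Generated) (Var Generated "ifThenElse") (map desugar [c, t, e])
desugar (App o f x) = App o (desugar f) (desugar x)
desugar e           = e
```

A renderer could then print any Generated application of "ifThenElse" back as surface `if` syntax.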

- Vlad

> On 5 Dec 2019, at 12:53, Richard Eisenberg  wrote:
> 
> Hi devs,
> 
> How can we mitigate this? By expanding the possibilities of a SrcSpan. A 
> SrcSpan is really a description of the provenance of a piece of AST. Right 
> now, it considers two possibilities: that the code came from a specific 
> stretch of rows and columns in an input file, or that the code came from 
> elsewhere. Instead, we can expand (and perhaps rename) SrcSpan to include 
> more possibilities. In support of my idea above, we could now have a SrcSpan 
> that says some AST came from overloading. Such code is suppressed by default 
> when pretty-printing. Thus, `fromString "blah"` could render as just `"blah"` 
> (what the user wrote), if fromString was inserted by the translation pass 
> described above. We can have a separate new SrcSpan that says that AST was 
> written by translation from some original AST. That way, `ifThenElse a b c` 
> can be printed as `if a then b else c`, if the former were translated from 
> the latter. Though it's beyond my use-case above, we can also imagine a new 
> SrcSpans that refer to a Template Haskell splice or a quasiquote.
> 
> What do we think? Is this a worthwhile direction of travel? I think the end 
> result would be both a cleaner implementation of overloading and rebindable 
> syntax *and* more informative and useful source-code provenances.
> 
> Richard



Re: DataCon tag value convention

2020-02-12 Thread Vladislav Zavialov
The globally unique tag for data constructors already exists: it’s a pointer to 
the StgInfoTable. You can observe it using getClosureRaw from GHC.HeapView.

Example:

Prelude GHC.HeapView> getClosureRaw Nothing
(0x000107fd8e38,[4429024840,4429024752],[])

Here, the globally unique tag is 0x000107fd8e38.

Note that newtype constructors do not get their own tag because newtypes 
guarantee that they do not change the underlying representation of data.

I’ve discovered the existence of such a tag only recently (Alexander Vershilov 
pointed it out to me), so I cannot say if it’s a reliable way to identify data 
constructors. For example, it is definitely not stable across several runs of 
the same binary. However, within a single run, it seems to work.
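Relatedly, the per-type tag (the one that starts over for each data type) is observable from Haskell via dataToTag#, exported from GHC.Exts. Note that it counts from 0, whereas GHC's internal ConTag convention counts from 1. The helpers below are kept monomorphic so the sketch stays portable across GHC versions:

```haskell
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), dataToTag#)

-- Index of the constructor within its own data type, counting from 0.
tagOfMaybe :: Maybe Int -> Int
tagOfMaybe x = I# (dataToTag# x)

tagOfOrdering :: Ordering -> Int
tagOfOrdering x = I# (dataToTag# x)
```

So Nothing and LT both get tag 0, illustrating that these tags are per-type, not global.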

- Vlad

> On 12 Feb 2020, at 21:57, Csaba Hruska  wrote:
> 
> Hello,
> 
> In theory could GHC codegen work if every data constructor in the whole 
> program  have a globally unique tag value instead of starting from 1 for each 
> algebraic data type?
> Would this break any GHC design decision?
> 
> Regards,
> Csaba



Using a development snapshot of happy

2020-08-02 Thread Vladislav Zavialov
Hi ghc-devs,

I’m working on the unification of parsers for terms and types, and one of the 
things I’d really like to make use of is a feature I implemented in ‘happy’ in 
October 2019 (9 months ago):

  https://github.com/simonmar/happy/pull/153

It’s been merged upstream, but there has been no release of ‘happy’, despite 
repeated requests:

  1. I asked for a release in December: 
https://github.com/simonmar/happy/issues/164
  2. Ben asked for a release a month ago: 
https://github.com/simonmar/happy/issues/168

I see two solutions here:

  a) Find a co-maintainer for ‘happy’ who could make releases more frequently 
(I understand the current maintainers probably don’t have the time to do it).
  b) Use a development snapshot of ‘happy’ in GHC

Maybe we need to do both, but one reason I’d like to see (b) in particular 
happen is that I can imagine introducing more features to ‘happy’ for use in 
GHC, and it’d be nice not to wait for a release every time. For instance, there 
are some changes I’d like to make to happy/alex in order to implement #17750

So here are two questions I have:

  1. Are there any objections to this idea?
  2. If not, could someone more familiar with the build process guide me as to 
how this should be implemented? Do I add ‘happy’ as a submodule and change 
something in the ./configure script, or is there more to it? Do I need to 
modify make/hadrian, and if so, then how?

Thanks,
- Vlad


Re: Hi. I'm new to this mailing list and have a few questions.

2020-08-04 Thread Vladislav Zavialov
This feature has already been proposed:

  https://github.com/ghc-proposals/ghc-proposals/pull/196

But the discussion there has stalled. You may want to take a look at the 
existing discussion, and if you see a way forward, comment on the proposal (or 
open a competing one).

- Vlad

> On 4 Aug 2020, at 20:45, Anselm Schüler (conversations subemail) 
>  wrote:
> 
> Thank you for the nice introduction :) !
> I will check out the GHC proposals site.
> And following Simon’s (I hope addressing with first name is OK) suggestion, 
> I’m going to give an outline of the idea.
>  
> The idea is to extend type application syntax to enable explicit assignment 
> of types to specific type variables.
> For instance, say I have f :: forall a b. (a, b) -> (b, a), and I want to 
> apply the type [String] to it. My only option is to do
> f @([String]) :: forall b. ([String], b) -> (b, [String]) 
> —but what if, instead, I want a function of type forall a. (a, [String]) -> 
> ([String], a)?
> I propose the following syntax:
> f @{b = [String]} :: forall a. ([String], b) -> (b, [String])
> This wouldn’t break any existing programs since using record syntax here is 
> already disallowed and met with an error message.
> A question is of course the symbol used for assignment (~, =, ::, or ->?).
>  
> I hope the code shows up as a monospace font on your end. I used the IBM Plex 
> Mono font, which is open-source.
>  
> Anselm Schüler
> www.anselmschueler.com
> m...@anselmschueler.com
>  
> From: Simon Peyton Jones
> Sent: Tuesday, August 4, 2020 18:44
> To: Richard Eisenberg; "Anselm Schüler (conversations subemail)"
> Cc: ghc-devs@haskell.org
> Subject: RE: Hi. I'm new to this mailing list and have a few questions.
>  
> Welcome Anselm.  ghc-devs is a very informal mailing list, and we welcome 
> newcomers.
>  
> For example, I have a feature idea in the back of my mind, which I imagine 
> would be easy to implement
>  
> What Richard says is right, but you should feel free to fly the kite on this 
> list if you want – or on Haskell Café – to get some idea of whether others 
> seem warm about the idea, before writing a full proposal.  
>  
> Simon
>  
> From: ghc-devs  On Behalf Of Richard Eisenberg
> Sent: 04 August 2020 16:05
> To: "Anselm Schüler (conversations subemail)" 
> 
> Cc: ghc-devs@haskell.org
> Subject: Re: Hi. I'm new to this mailing list and have a few questions.
>  
> Hi Anselm,
>  
> Welcome!
>  
> A good way of getting used to a list like this one is to wait a little while 
> and observe what kind of messages others send; this will give you a feel for 
> how the list is used. If you're impatient, you can also check out the 
> archives at https://mail.haskell.org/pipermail/ghc-devs/.
>  
> As for a feature request: if your feature changes the language GHC accepts 
> (most do), the right place to post is at 
> https://github.com/ghc-proposals/ghc-proposals. There is a description of how 
> to proceed on that page. Proposals submitted there get debated within the 
> community and then eventually sent to a GHC Steering Committee for a vote on 
> acceptance or rejection. Then, we worry about implementing it. If you have a 
> suggestion that does not change the language GHC accepts, you can post an 
> Issue at https://gitlab.haskell.org/ghc/ghc/.
>  
> I hope this is helpful!
> Richard
>  
> 
> On Aug 4, 2020, at 8:59 AM, Anselm Schüler (conversations subemail) 
>  wrote:
>  
> First of all, in general, I’m new to mailing lists (as used for discussions) 
> in general, so a question about that:
> When subscribed to the mailing list, do you get every message, or are some 
> discussions hidden?
>  
> Second of all, I’d like to know what kinds of messages are appropriate here. 
> I’m not familiar with coding compilers or anything of the like, so I’m 
> somewhat afraid of offering unhelpful comments or being just woefully 
> underqualified to participate here.
> For example, I have a feature idea in the back of my mind, which I imagine 
> would be easy to implement (that might be wrong). Is it alright if I submit 
> that here or should I use some other forum?
>  
> Thank you in advance for the answers.
>  
> Anselm Schüler
> www.anselmschueler.com
> m...@anselmschueler.com


Re: Parsing funny arrows

2020-08-28 Thread Vladislav Zavialov
Hi Csongor,

I believe the reason for this failure is that  a -> @m b  gets parsed as  a -> 
@(m b).
Why is that? Because a ‘btype’ includes type-level application.

If you replace the ‘btype’ after PREFIX_AT with an ‘atype’, this particular 
issue should go away. At least that’s my hypothesis, I haven’t tested it.

- Vlad

> On 29 Aug 2020, at 01:32, Csongor Kiss  wrote:
> 
> Hello devs,
> 
> I am trying to modify GHC's parser to allow the following syntax in types:
> 
>   a -> @m b
> 
> but my naive attempt was unsuccessful:
> 
> type :: { LHsType GhcPs }
> : btype{ $1 }
> | btype '->' PREFIX_AT btype ctype  ...
> 
> For example when I try to parse the following code (and turn on the lexer 
> debug log):
>   
>   test :: a -> @m b
>   test = undefined
> 
> I get the following 
> 
> token: ITvarid "test"
> token: ITdcolon NormalSyntax
> token: ITvarid "a"
> token: ITrarrow NormalSyntax
> token: ITtypeApp
> token: ITvarid "m"
> token: ITvarid "b"
> token: ITsemi
> 
> Parse.hs:2:1: error:
> parse error (possibly incorrect indentation or mismatched brackets)
>   |
> 2 | test = undefined
> 
> 
> I don't have much experience with hacking on the parser so I'm likely missing 
> something obvious.
> Could someone please point at what I might be doing wrong?
> 
> Thanks in advance.
> 
> Cheers,
> Csongor


Re: Parsing funny arrows

2020-08-29 Thread Vladislav Zavialov
The lexer produces only as many tokens as the parser requires. In the 
‘lexerDbg’ dump that you included in the message, there were these lines:

  token: ITvarid "m"
  token: ITvarid "b"
  token: ITsemi

So I knew that the parser consumed the entire string, up to the virtual 
semicolon. I also recognized the parse error as the one produced by ‘happy’ 
rather than by a later validation pass. So even though the parser consumed the 
entire string, it failed. Therefore, it didn’t expect this string to end so 
abruptly; it expected it to continue.

But what did it expect to find? To figure it out, we need to know which grammar 
production is involved in the failure. The only grammar production that 
could’ve consumed the ‘->’ PREFIX_AT sequence successfully and proceed to 
process the rest of the string is this one:

   btype '->' PREFIX_AT btype ctype

By inspecting the definitions of ‘btype’ and ‘ctype’, one can see that neither 
of those accept the empty string, and both of those accept type-level function 
application. Thus it’s possible that ‘btype’ consumed “m b” as an application, 
and ‘ctype’ failed because it didn’t accept the remaining empty string:

  btype = “m b”
  ctype = parse error (nothing to consume)

But what you wanted instead was:

  btype = “m”
  ctype = “b”

The solution is to use ‘atype’ instead of ‘btype’, as ‘atype’ does not accept 
type-level application.
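Concretely, the hypothesis from my previous message amounts to a one-token change in the production (a sketch against the fragment quoted earlier, untested):

```
type :: { LHsType GhcPs }
        : btype                               { $1 }
        | btype '->' PREFIX_AT atype ctype    ...
```

Since ‘atype’ only matches atomic types, “m” and “b” can no longer be glued together into a single application: ‘atype’ takes “m” and ‘ctype’ takes “b”.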

By the way, there’s also the input string on which the original grammar 
would’ve succeeded (or at least I think so):

  test :: a -> @m forall b. b
  test = undefined

That’s because ‘btype’ wouldn’t have consumed the ‘forall’, it would’ve stopped 
at this point. And then ‘ctype’ could’ve consumed “forall b. b”.

I don’t think there’s a parser equivalent of -ddump-tc-trace. You’ll need to 
figure this stuff out by reading the grammar and keeping in mind that ‘happy’ 
generates a shift-reduce parser that does not backtrack. The ‘lexerDbg’ output 
is useful to see how far the parser got. And there’s also this command in case 
you want to go low level and inspect the state machine generated by ‘happy’:

happy -agc --strict compiler/GHC/Parser.y -idetailed-info

Hope this helps,
- Vlad

> On 29 Aug 2020, at 10:16, Csongor Kiss  wrote:
> 
> Thanks a lot Vlad and Shayne, that indeed did the trick!
> 
> Out of curiosity, how could I have figured out that this was the culprit? The 
> parse
> error I got was a bit puzzling, and I couldn't find any flags that would give 
> more information
> (I think I was looking for the parser equivalent of -ddump-tc-trace).
> 
> Best,
> Csongor



Re: Parsing funny arrows

2020-08-29 Thread Vladislav Zavialov
Hi Brandon, I’m afraid your analysis is not entirely correct. The shift/reduce 
conflict is not on @ but after it.

- Vlad

> On 29 Aug 2020, at 14:42, Brandon Allbery  wrote:
> 
> Another way to figure it out is the shift/reduce conflict on @, which tells 
> you it had two ways to recognize it. "Reduce" here means returning to your 
> parser rule, so "shift" means btype wanted to recognize the @. Inspecting 
> btype would then have shown that it was looking for a type application.


Re: HsPragTick

2020-08-31 Thread Vladislav Zavialov
I was under impression it was somehow related to HPC. Since I'm not
sufficiently familiar with HPC's inner workings, I kept it around just to
be safe.

- Vlad

On Mon, Aug 31, 2020, 17:03 Ryan Scott  wrote:

> I think that HsPragTick is unused as of [1]. In fact, I was under the
> impression that [1] removed HsPragTick entirely (as the commit message
> would suggest), but upon further inspection, that doesn't appear to be the
> case. Vlad, do you recall why HsPragTick was kept around?
>
> Ryan S.
> -
> [1] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/2154
>


Re: Use of forall as a sigil

2020-12-03 Thread Vladislav Zavialov
There is no *implicit* universal quantification in that example, but there
is an explicit quantifier. It is written as follows:

  forall a ->

which is entirely analogous to:

  forall a.

in all ways other than the additional requirement to instantiate the type
variable visibly at use sites.
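At the time of writing this visible form of forall is accepted only in kinds, but that is enough for a small, self-contained sketch (assuming GHC >= 8.6):

```haskell
{-# LANGUAGE PolyKinds, DataKinds #-}

import Data.Kind (Type)

-- 'k' is quantified visibly: every use of T must spell it out.
data T :: forall k -> k -> Type

tEx :: T Bool 'True -> ()   -- the kind argument Bool is written explicitly
tEx _ = ()

-- Compare the invisible form, where 'k' is inferred at use sites.
data S :: forall k. k -> Type

sEx :: S 'True -> ()        -- 'k' is inferred to be Bool
sEx _ = ()
```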

- Vlad


On Thu, Dec 3, 2020, 19:12 Bryan Richter  wrote:

> I must be confused, because it sounds like you are contradicting yourself.
> :) In one sentence you say that there is no assumed universal
> quantification going on, and in the next you say that the function does
> indeed work for all types. Isn't that the definition of universal
> quantification?
>
> (We're definitely getting somewhere interesting!)
>
> Den tors 3 dec. 2020 17:56Richard Eisenberg  skrev:
>
>>
>>
>> On Dec 3, 2020, at 10:23 AM, Bryan Richter  wrote:
>>
>> Consider `forall a -> a -> a`. There's still an implicit universal
>> quantification that is assumed, right?
>>
>>
>> No, there isn't, and I think this is the central point of confusion. A
>> function of type `forall a -> a -> a` does work for all types `a`. So I
>> think the keyword is appropriate. The only difference is that we must state
>> what `a` is explicitly. I thus respectfully disagree with
>>
>> But somewhere, an author decided to reuse the same keyword to herald a
>> type argument. It seems they stopped thinking about the meaning of the word
>> itself, saw that it was syntactically in the right spot, and borrowed it to
>> mean something else.
>>
>>
>> Does this help clarify? And if it does, is there a place you can direct
>> us to where the point could be made more clearly? I think you're far from
>> the only one who has tripped here.
>>
>> Richard
>>


Re: Using a development snapshot of happy

2020-12-03 Thread Vladislav Zavialov
FWIW I have a parser-generator implementation here
https://github.com/simonmar/happy/pull/170

On Fri, Dec 4, 2020, 06:35 John Ericson 
wrote:

> Seeing https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4560 stuck on
> needing a new version of Alex reminded me of this.
>
> Ben raises a good point on Happy bootstrapping from itself making this a
> pain, but I'd hope we could just get around this by vendoring the generated
> happy parser in the happy repo. In fact there are two ways to do this:
>
> - as a permanent change, in which case we'd want to write a script to
> update the vendor instead of the custom sdist that is the point of the
> Makefile.
>
> - in a separate "master-sdist" branch of generated sdists, which GHC would
> track with the submodule instead of master.
>
> While I'm rarely for vendoring generated code, breaking a bootstrap cycle
> is a pretty darn good reason. Also this is rather benign case of bootstrap
> artifact vendoring as:
>
>  - Generated happy code is a lot easier to understand than machine code at
> scale
>
>  - Ken Thompson attacks on the parser scare me less than elsewhere in the
> compiler
>
> Finally If all that is still too ugly, well, it would be nice to have a
> parser-combinator implementation of the same functionality that can double
> as an oracle for testing.
>
> Cheers,
>
> John
> On 8/4/20 1:21 PM, Ben Gamari wrote:
>
> Vladislav Zavialov   writes:
>
>
> Hi ghc-devs,
>
> I’m working on the unification of parsers for terms and types, and one
> of the things I’d really like to make use of is a feature I
> implemented in ‘happy’ in October 2019 (9 months ago):
>
>   https://github.com/simonmar/happy/pull/153
>
> It’s been merged upstream, but there has been no release of ‘happy’,
> despite repeated requests:
>
>   1. I asked for a release in December: 
> https://github.com/simonmar/happy/issues/164
>   2. Ben asked for a release a month ago: 
> https://github.com/simonmar/happy/issues/168
>
> I see two solutions here:
>
>   a) Find a co-maintainer for ‘happy’ who could make releases more
>   frequently (I understand the current maintainers probably don’t have
>   the time to do it).
>   b) Use a development snapshot of ‘happy’ in GHC
>
> Maybe we need to do both, but one reason I’d like to see (b) in
> particular happen is that I can imagine introducing more features to
> ‘happy’ for use in GHC, and it’d be nice not to wait for a release
> every time. For instance, there are some changes I’d like to make to
> happy/alex in order to implement #17750
>
> So here are two questions I have:
>
>   1. Are there any objections to this idea?
>
>
> I'm not entirely keen on the idea: while the cost of the submodule
> itself is pretty low (happy is a small package which takes little time
> to build), I am skeptical of addressing social issues like happy's lack
> of maintenance with technical solutions. Ultimately, shipping happy as a
> submodule would merely kick the current problem down the road:
> eventually (when we release GHC) we will need a happy release. Unless
> the underlying maintainership problem is addressed we will end up right
> back where we are today.
>
> Moreover, note that happy requires happy as a build dependency so we won't be
> able to drop it as a build dependency of GHC even if we do include it as
> a submodule.
>
> For this reason, I would weakly prefer that we first find a maintainer
> and try to get a release out before jumping straight to adding happy as
> a submodule. I will try to bring up the matter with Simon Marlow to see
> if we can't find a solution here.
>
>
>   2. If not, could someone more familiar with the build process guide
>   me as to how this should be implemented? Do I add ‘happy’ as a
>   submodule and change something in the ./configure script, or is
>   there more to it? Do I need to modify make/hadrian, and if so, then
>   how?
>
>
> It will be a tad more involved than this. We will need to teach the
> build systems to build Happy, use the configure executable, and update
> the source distribution packaging rules to include the new submodule.
> Moreover, happy (unfortunately) has a make-based build system which will
> need to be used to generate its parser.
>
> Updating the build systems likely won't be difficult, but there isn't clear
> documentation on what it will involve. This will really be a matter of
> finding a similar existing case (e.g. genprimops, perhaps?), following
> it as a model, and figuring out how to fill any gaps.
>
> Moreover, build system logic is inevitably a bug-nest; adding the same
> logic twice greatly increases the chance that 

Re: What's the modern way to apply a polymorphic function to a Dynamic value in GHC 8.8 and onwards?

2021-04-12 Thread Vladislav Zavialov
Would something like this work for you?

  import Type.Reflection
  import Data.Dynamic

  apD :: Typeable f => (forall a. a -> f a) -> Dynamic -> Dynamic
  apD f (Dynamic t a) = withTypeable t $ Dynamic typeRep (f a)
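For example (a quick sketch, untested), wrapping a value in a list should round-trip through Dynamic:

```haskell
{-# LANGUAGE GADTs, RankNTypes #-}

import Type.Reflection
import Data.Dynamic

apD :: Typeable f => (forall a. a -> f a) -> Dynamic -> Dynamic
apD f (Dynamic t a) = withTypeable t $ Dynamic typeRep (f a)

main :: IO ()
main = do
  let d = apD (\x -> [x]) (toDyn True)
  print (fromDynamic d :: Maybe [Bool])  -- Just [True]
```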

- Vlad

> On 12 Apr 2021, at 14:34, YueCompl via ghc-devs  wrote:
> 
> Dear Cafe and GHC devs,
> 
> 
> There used to be a "principled way with pattern match on the constructor":
> 
> ```hs
> data Dynamic where
>  Dynamic :: Typeable a => a -> Dynamic
> 
> apD :: Typeable f => (forall a. a -> f a) -> Dynamic -> Dynamic
> apD f (Dynamic a) = Dynamic $ f a
> ```
> Source: 
> https://www.reddit.com/r/haskell/comments/2kdcca/q_how_to_apply_a_polymorphic_function_to_a/
> 
> 
> But now with GHC 8.8 as in my case, `Dynamic` constructor has changed its 
> signature to: 
> 
> ```hs
> Dynamic :: forall a. TypeRep a -> a -> Dynamic
> ```
> 
> Which renders the `apD` not working anymore. 
> 
> 
> And it seems missing dependencies now for an older solution Edward KMETT 
> provides:
> 
> ```hs
> apD :: forall f. Typeable1 f => (forall a. a -> f a) -> Dynamic -> Dynamic
> apD f a = dynApp df a
>  where t = dynTypeRep a
>df = reify (mkFunTy t (typeOf1 (undefined :: f ()) `mkAppTy` t)) $
>  \(_ :: Proxy s) -> toDyn (WithRep f :: WithRep s (() -> f ()))
> ```
> Source: 
> https://stackoverflow.com/questions/10889682/how-to-apply-a-polymorphic-function-to-a-dynamic-value
> 
> 
> So, how can I do that nowadays?
> 
> Thanks,
> Compl
> 


Re: GitLab downtime

2021-05-31 Thread Vladislav Zavialov
It is currently down with a 502 error.

- Vlad

> On 1 Jun 2021, at 04:01, Ben Gamari  wrote:
> 
> Hi all,
> 
> I believe gitlab.haskell.org should be back up at this point. There are
> still a few ancillary services (e.g. grafana.gitlab.haskell.org) that I
> haven't yet validated but every critical should be functional. Do let me
> know if you find anything amiss.
> 
> Cheers,
> 
> - Ben


Re: Anyone ever wondered what our favourite merge bot is up to?

2021-06-10 Thread Vladislav Zavialov
That’s going to be very helpful, thank you! What do you think of adding this 
link to https://gitlab.haskell.org/marge-bot? This way it’ll be more 
discoverable.

- Vlad

> On 9 Jun 2021, at 13:29, Matthew Pickering  
> wrote:
> 
> Hi all,
> 
> I added a new dashboard on grafana which exposes some of the systemd
> logs from marge-bot.
> 
> https://grafana.gitlab.haskell.org/d/iiCppweMz/marge-bot?orgId=2&refresh=5m
> 
> You can use the logs to see why marge doesn't want to merge your MR or
> whether she is dead.
> 
> Cheers,
> 
> Matt


Re: Optics?

2021-10-03 Thread Vladislav Zavialov
Hi Alan,

Your pair of functions can be packaged up as a single function, so that

getEpa :: a -> EpaLocation
setEpa :: a -> EpaLocation -> a

becomes

lensEpa :: forall f. Functor f => (EpaLocation -> f EpaLocation) -> (a 
-> f a)  

And the get/set parts can be recovered by instantiating `f` to either Identity 
or Const.

The nice thing about lenses is that they compose, so that if you need nested 
access, you could define several lenses, compose them together, and then reach 
deep into a data structure. Then lenses might offer some simplification. 
Otherwise, an ordinary getter/setter pair is just as good.
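To make the Identity/Const trick concrete, here is a minimal self-contained sketch ('Ann' is a hypothetical stand-in for the real annotation types, not part of GHC):

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- A stand-in for an annotation with an EpaLocation-like field.
data Ann = Ann { annLoc :: Int, annKw :: String } deriving Show

lensLoc :: Lens Ann Int
lensLoc f (Ann loc kw) = (\loc' -> Ann loc' kw) <$> f loc

-- The getter falls out by instantiating f ~ Const a ...
view :: Lens s a -> s -> a
view l = getConst . l Const

-- ... and the setter by instantiating f ~ Identity.
set :: Lens s a -> a -> s -> s
set l x = runIdentity . l (\_ -> Identity x)

main :: IO ()
main = do
  print (view lensLoc (Ann 3 "let"))   -- 3
  print (set lensLoc 7 (Ann 3 "let"))  -- Ann {annLoc = 7, annKw = "let"}
```

Nested access is then ordinary function composition: lensOuter . lensInner is again a lens.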

- Vlad

> On 3 Oct 2021, at 20:40, Alan & Kim Zimmerman  wrote:
> 
> Hi all
> 
> I am working on a variant of the exact printer which updates the annotation 
> locations from the `EpaSpan` version to the `EpaDelta` version, as the 
> printing happens
> 
> data EpaLocation = EpaSpan RealSrcSpan
>  | EpaDelta DeltaPos
> 
> The function doing the work is this
> 
> markAnnKw :: (Monad m, Monoid w)
>   => EpAnn a -> (a -> EpaLocation) -> (a -> EpaLocation -> a) -> AnnKeywordId 
> -> EP w m (EpAnn a)
> 
> which gets an annotation, a function to pull a specific location out, and one 
> to update it.
> 
> I do not know much about lenses, but have a feeling that I could simplify 
> things by using one.
> 
> Can anyone give me any pointers?
> 
> Alan
> 


Re: Case split uncovered patterns in warnings or not?

2021-11-10 Thread Vladislav Zavialov
Integer is an interesting example. I think it reveals another issue: 
exhaustiveness checking should account for abstract data types. If the 
constructors are not exported, do not case split.

- Vlad

> On 10 Nov 2021, at 12:48, Oleg Grenrus  wrote:
> 
> It should not. Not even when forced.
> 
> I have seen `Integer` constructors presented to me, for example:
> 
> module Ex where
> 
> foo :: Bool -> Integer -> Integer
> foo True i = i
> 
> With GHC-8.8 the warning is good:
> 
> % ghci-8.8.4 -Wall Ex.hs 
> GHCi, version 8.8.4: https://www.haskell.org/ghc/  :? for help
> Loaded GHCi configuration from /home/phadej/.ghci
> [1 of 1] Compiling Ex   ( Ex.hs, interpreted )
> 
> Ex.hs:4:1: warning: [-Wincomplete-patterns]
> Pattern match(es) are non-exhaustive
> In an equation for ‘foo’: Patterns not matched: False _
>   |
> 4 | foo True i = i
>   | ^^
> 
> With GHC-8.10 it is straight up awful.
> I'm glad I don't have to explain it to any beginner,
> or to a person who doesn't know how Integer is implemented.
> (9.2 is about as bad too).
> 
> % ghci-8.10.4 -Wall Ex.hs
> GHCi, version 8.10.4: https://www.haskell.org/ghc/  :? for help
> Loaded GHCi configuration from /home/phadej/.ghci
> [1 of 1] Compiling Ex   ( Ex.hs, interpreted )
> 
> Ex.hs:4:1: warning: [-Wincomplete-patterns]
> Pattern match(es) are non-exhaustive
> In an equation for ‘foo’:
> Patterns not matched:
> False (integer-gmp-1.0.3.0:GHC.Integer.Type.S# _)
> False (integer-gmp-1.0.3.0:GHC.Integer.Type.Jp# _)
> False (integer-gmp-1.0.3.0:GHC.Integer.Type.Jn# _)
>   |
> 4 | foo True i = i
>   | ^^^
> 
> - Oleg
> 
> 
> On 9.11.2021 15.17, Sebastian Graf wrote:
>> Hi Devs,
>> 
>> In https://gitlab.haskell.org/ghc/ghc/-/issues/20642 we saw that GHC >= 8.10 
>> outputs pattern match warnings a little differently than it used to. Example 
>> from there:
>> 
>> {-# OPTIONS_GHC -Wincomplete-uni-patterns #-}
>> 
>> foo :: [a] -> [a]
>> foo [] = []
>> foo xs = ys
>>   where
>>   (_, ys@(_:_)) = splitAt 0 xs
>> 
>> main :: IO ()
>> main = putStrLn $ foo $ "Hello, coverage checker!"
>> Instead of saying
>> 
>> 
>> 
>> ListPair.hs:7:3: warning: [-Wincomplete-uni-patterns]
>> Pattern match(es) are non-exhaustive
>> In a pattern binding: Patterns not matched: (_, [])
>> 
>> 
>> 
>> We now say
>> 
>> 
>> 
>> ListPair.hs:7:3: warning: [-Wincomplete-uni-patterns]
>> Pattern match(es) are non-exhaustive
>> In a pattern binding:
>> Patterns of type ‘([a], [a])’ not matched:
>> ([], [])
>> ((_:_), [])
>> 
>> 
>> 
>> E.g., newer versions do (one) case split on pattern variables that haven't 
>> even been scrutinised by the pattern match. That amounts to quantitatively 
>> more pattern suggestions and for each variable a list of constructors that 
>> could be matched on.
>> The motivation for the change is outlined in 
>> https://gitlab.haskell.org/ghc/ghc/-/issues/20642#note_390110, but I could 
>> easily be swayed not to do the case split. Which arguably is less 
>> surprising, as Andreas Abel points out.
>> 
>> Considering the other examples from my post, which would you prefer?
>> 
>> Cheers,
>> Sebastian
>> 
>> 


Re: Weird problem involving untouchable type variables

2017-05-05 Thread Vladislav Zavialov
I can reproduce this on GHC 8.2.1 and GHC HEAD as well.

This looks like a bug in the ambiguity checker. Disabling it with
-XAllowAmbiguousTypes, as GHC suggests, makes the error go away.
Report it on GHC Trac [1]. As a work-around you could enable
-XAllowAmbiguousTypes — it should be safe as it merely disables
ambiguity checking, which is not necessary to ensure well-typedness.

[1] https://ghc.haskell.org/trac/ghc/

On Fri, May 5, 2017 at 1:11 PM, Wolfgang Jeltsch
 wrote:
> Hi!
>
> My inquiry on the users mailing list about untouchable types did not get
> a reply. Maybe it is better to ask my question here.
>
> Today I encountered for the first time the notion of an untouchable type
> variable. I have no clue what this is supposed to mean. The error
> message that talked about a type variable being untouchable is unfounded
> in my opinion. A minimal example that exposes my problem is the
> following:
>
>> {-# LANGUAGE Rank2Types, TypeFamilies #-}
>>
>> import GHC.Exts (Constraint)
>>
>> type family F a b :: Constraint
>>
>> data T b c = T
>>
>> f :: (forall b . F a b => T b c) -> a
>> f _ = undefined
>
> This results in the following error message from GHC 8.0.1:
>
>> Untouchable.hs:9:6: error:
>> • Couldn't match type ‘c0’ with ‘c’
>> ‘c0’ is untouchable
>>   inside the constraints: F a b
>>   bound by the type signature for:
>>  f :: F a b => T b c0
>>   at Untouchable.hs:9:6-37
>>   ‘c’ is a rigid type variable bound by
>> the type signature for:
>>   f :: forall a c. (forall b. F a b => T b c) -> a
>> at Untouchable.hs:9:6
>>   Expected type: T b c0
>> Actual type: T b c
>> • In the ambiguity check for ‘f’
>>   To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
>>   In the type signature:
>> f :: (forall b. F a b => T b c) -> a
>
> I have no idea what the problem is. The type of f looks fine to me. The
> type variable c should be universally quantified at the outermost
> level. Apparently, c is not related to the type family F at all. Why
> does the type checker even introduce a type variable c0?
>
> All the best,
> Wolfgang


Re: [commit: ghc] master: Embrace -XTypeInType, add -XStarIsType (d650729)

2018-06-15 Thread Vladislav Zavialov
Hi Gabor,

Indeed, I can reproduce this issue. This is happening because your locale
does not support Unicode. It is probably something like this:

$ locale -a
C
POSIX

Rather than fix this particular issue, I suggest we forbid Unicode in GHC
sources using the linter (the one that checks for lines too long, etc) to
avoid such problems in the future.

> Can this be done with unicode escapes somehow?

Yes, that would be '\x2605'.

All the best,
- Vladislav

On Jun 15, 2018 11:34, "Gabor Greif"  wrote:

> My `happy` chokes on the unicode sequence you added:
>
> (if isUnicode $1 then "★" else "*")
>
> Can this be done with unicode escapes somehow?
>
> Cheers,
>
> Gabor
>
> PS: Happy Version 1.19.9 Copyright (c) 1993-1996 Andy Gill, Simon
> Marlow (c) 1997-2005 Simon Marlow
>
> On 6/14/18, g...@git.haskell.org  wrote:
> > Repository : ssh://g...@git.haskell.org/ghc
> >
> > On branch  : master
> > Link   :
> >
> http://ghc.haskell.org/trac/ghc/changeset/d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60/ghc
> >
> >>---
> >
> > commit d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60
> > Author: Vladislav Zavialov 
> > Date:   Thu Jun 14 15:02:36 2018 -0400
> >
> > Embrace -XTypeInType, add -XStarIsType
> >
> > Summary:
> > Implement the "Embrace Type :: Type" GHC proposal,
> > .../ghc-proposals/blob/master/proposals/0020-no-type-in-type.rst
> >
> > GHC 8.0 included a major change to GHC's type system: the Type ::
> Type
> > axiom. Though casual users were protected from this by hiding its
> > features behind the -XTypeInType extension, all programs written in
> GHC
> > 8+ have the axiom behind the scenes. In order to preserve backward
> > compatibility, various legacy features were left unchanged. For
> example,
> > with -XDataKinds but not -XTypeInType, GADTs could not be used in
> types.
> > Now these restrictions are lifted and -XTypeInType becomes a
> redundant
> > flag that will be eventually deprecated.
> >
> > * Incorporate the features currently in -XTypeInType into the
> >   -XPolyKinds and -XDataKinds extensions.
> > * Introduce a new extension -XStarIsType to control how to parse * in
> >   code and whether to print it in error messages.
> >
> > Test Plan: Validate
> >
> > Reviewers: goldfire, hvr, bgamari, alanz, simonpj
> >
> > Reviewed By: goldfire, simonpj
> >
> > Subscribers: rwbarton, thomie, mpickering, carter
> >
> > GHC Trac Issues: #15195
> >
> > Differential Revision: https://phabricator.haskell.org/D4748
> >
> >
> >>---
> >
> > d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60
> >  .gitignore |   1 +
> >  .gitmodules|   4 +-
> >  compiler/basicTypes/DataCon.hs |  22 +-
> >  compiler/basicTypes/Name.hs|  21 +-
> >  compiler/basicTypes/RdrName.hs |  96 +++-
> >  compiler/basicTypes/SrcLoc.hs  |   5 +-
> >  compiler/deSugar/DsMeta.hs |   7 +-
> >  compiler/hsSyn/Convert.hs  |  37 +-
> >  compiler/hsSyn/HsDecls.hs  |   9 +-
> >  compiler/hsSyn/HsExtension.hs  |  16 +-
> >  compiler/hsSyn/HsInstances.hs  |   5 -
> >  compiler/hsSyn/HsTypes.hs  | 117 +
> >  compiler/iface/IfaceType.hs|   8 +-
> >  compiler/main/DynFlags.hs  |  31 ++
> >  compiler/main/DynFlags.hs-boot |   1 +
> >  compiler/main/HscTypes.hs  |   3 +-
> >  compiler/parser/Lexer.x| 104 +++--
> >  compiler/parser/Parser.y   |  88 ++--
> >  compiler/parser/RdrHsSyn.hs| 190 
> >  compiler/prelude/PrelNames.hs  |   7 +-
> >  compiler/prelude/PrelNames.hs-boot |   3 +-
> >  compiler/prelude/TysWiredIn.hs |  24 +-
> >  compiler/rename/RnEnv.hs   |  43 +-
> >  compiler/rename/RnSource.hs|   4 +-
> >  compiler/rename/RnTypes.hs | 186 ++--
> >  compiler/typecheck/TcDeriv.hs  

Parser.y rewrite with parser combinators

2018-10-08 Thread Vladislav Zavialov
Hello devs,

Recently I've been working on a couple of parsing-related issues in
GHC. I implemented support for the -XStarIsType extension, fixed
parsing of the (!) type operator (Trac #15457), allowed using type
operators in existential contexts (Trac #15675).

Doing these tasks required way more engineering effort than I expected
from my prior experience working with parsers, due to the complexities
of GHC's grammar.

In the last couple of days, I've been working on Trac #1087 - a
12-year-old parsing bug. After trying out a couple of approaches, to
my dismay I realised that fixing it properly (including support for
bang patterns inside infix constructors, etc) would require a complete
rewrite of expression and pattern parsing logic.

Worse yet, most of the work would be done outside Parser.y in Haskell
code instead, in RdrHsSyn helpers. When I try to keep the logic inside
Parser.y, in every design direction I face reduce/reduce conflicts.

The reduce/reduce conflicts are the worst.

Perhaps it is finally time to admit that Haskell syntax, with all of
the GHC extensions, cannot fit into an LALR grammar?

The extent of hacks that we have right now just to make parsing
possible is astonishing. For instance, we have dedicated constructors
in HsExpr to make parsing patterns possible (EWildPat, EAsPat,
EViewPat, ELazyPat). That is, one of the fundamental types (that the
type checker operates on) has four additional constructors that exist
due to a reduce/reduce conflict between patterns and expressions.

I propose a complete rewrite of GHC's parser to use recursive descent
parsing with monadic parser combinators.

1. We could significantly simplify parsing logic by doing things in a
more direct manner. For instance, instead of parsing patterns as
expressions and then post-processing them, we could have separate
parsing logic for patterns and expressions.

2. We could fix long-standing parsing bugs like Trac #1087 because
recursive descent offers more expressive power than LALR (at the cost
of support for left recursion, which is not much of a loss in
practice).

3. New extensions to the grammar would require less engineering effort.
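To make point 1 concrete, here is a toy sketch of what "separate parsing logic for patterns and expressions" could look like with hand-rolled combinators. All names here are invented for illustration; this is not a proposed implementation, and real GHC parsing is vastly more involved. The point is only that a wildcard parses directly as a pattern, so no EWildPat-style placeholder constructor in the expression type is needed.

```haskell
-- Minimal hand-rolled parser type (illustrative only).
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

-- Fall back to the second parser if the first fails; the second branch
-- restarts from the original input.
orElse :: Parser a -> Parser a -> Parser a
orElse (Parser p) (Parser q) = Parser $ \s -> maybe (q s) Just (p s)

satisfy :: (Char -> Bool) -> Parser Char
satisfy ok = Parser $ \s -> case s of
  (c:cs) | ok c -> Just (c, cs)
  _             -> Nothing

data Expr = Var Char          deriving (Eq, Show)
data Pat  = PVar Char | PWild deriving (Eq, Show)

-- Patterns get their own grammar: '_' is only ever a pattern here.
pPat :: Parser Pat
pPat = (PWild <$ satisfy (== '_')) `orElse` (PVar <$> satisfy isLow)
  where isLow c = c >= 'a' && c <= 'z'

-- The expression grammar simply has no wildcard production.
pExpr :: Parser Expr
pExpr = Var <$> satisfy (\c -> c >= 'a' && c <= 'z')
```

With one grammar per syntactic category, `pExpr` rejects `_` outright instead of accepting it and deferring the error to a post-processing pass.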

Of course, this rewrite is a huge chunk of work, so before I start, I
would like to know that this work would be accepted if done well.
Here's what I want to achieve:

* Comparable performance. The new parser could turn out to be faster
because it would do less post-processing, but it could be slower
because 'happy' does all the sorts of low-level optimisations. I will
consider this project a success only if comparable performance is
achieved.

* Correctness. The new parser should handle 100% of the syntactic
constructs that the current parser can handle.

* Error messages. The new error messages should be of equal or better
quality than existing ones.

* Elegance. The new parser should bring simplification to other parts
of the compiler (e.g. removal of pattern constructors from HsExpr).
And one of the design principles is to represent things by dedicated
data structures, in contrast to the current state of affairs where we
represent patterns as expressions, data constructor declarations as
types (before D5180), etc.

Let me know if this is a good/acceptable direction of travel. That's
definitely something that I personally would like to see happen.

All the best,
- Vladislav
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Parser.y rewrite with parser combinators

2018-10-08 Thread Vladislav Zavialov
> complex parsers written using parsing combinators is that they tend to be 
> quite difficult to modify and have any kind of assurance that now you haven't 
> broken something else

That's true regardless of implementation technique; parsers are rather
delicate. A LALR-based parser generator does provide more information
when it detects shift/reduce and reduce/reduce conflicts, but I never
found this information useful. It was always quite the opposite of
being helpful - an indication that a LALR parser could not handle my
change and I had to look for workarounds.

> With a combinator based parser, you basically have to do program 
> verification, or more pragmatically, have a large test suite and hope that 
> you tested everything.

Even when doing modifications to Parser.y, I relied mainly on the test
suite to determine whether my change was right (and the test suite
always caught many issues). A large test suite is the best approach
both for 'happy'-based parsers and for combinator-based parsers.

> and then have a separate pass that validates and fixes up the results

That's where my concern lies. This separate pass is confusing (at
least for me - it's not the most straightforward thing to parse
something incorrectly and then restructure it), it is hard to modify,
it does not handle corner cases (e.g. #1087).

Since we have all this Haskell code that does a significant portion of
processing, why even bother with having a LALR pass before it?

> namely we can report better parse errors

I don't think that's true, we can achieve better error messages with
recursive descent.

> Also, with the new rewrite of HsSyn, we should be able to mark such 
> constructors as only usable in the parsing pass, so later passes wouldn't 
> need to worry about them.

Not completely true, GhcPs-parametrized structures are the final
output of parsing, so at least the renamer will face these
constructors.
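For reference, here is a hedged sketch of the pass-restriction mechanism under discussion. The types are invented and heavily simplified (GHC's real definitions live in HsExtension and friends): a Trees-That-Grow extension field whose type is Void makes a constructor unconstructible in a given pass. Note the caveat above still holds — the renamer consumes `Expr GhcPs` as input, so it must still match the parser-only constructor, even though it can rule it out of its output type.

```haskell
{-# LANGUAGE TypeFamilies, EmptyDataDecls #-}
import Data.Void (Void, absurd)

data GhcPs  -- phase index: parser output
data GhcRn  -- phase index: renamer output

-- Extension field for the parser-only wildcard constructor.
type family XEWildPat p
type instance XEWildPat GhcPs = ()    -- constructible after parsing
type instance XEWildPat GhcRn = Void  -- unconstructible after renaming

data Expr p
  = Var String
  | EWildPat (XEWildPat p)  -- parser-only placeholder

-- Code over the renamed AST can dismiss the parser-only case entirely:
exprName :: Expr GhcRn -> String
exprName (Var s)      = s
exprName (EWildPat v) = absurd v  -- statically impossible
```

The renamer itself would have type `Expr GhcPs -> RnM (Expr GhcRn)` and so still faces `EWildPat` on the way in — exactly the point made above.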

On Tue, Oct 9, 2018 at 1:00 AM Iavor Diatchki  wrote:
>
> Hello,
>
> my experience with complex parsers written using parsing combinators is that 
> they tend to be quite difficult to modify and have any kind of assurance that 
> now you haven't broken something else.   While reduce-reduce errors are 
> indeed annoying, you at least know that there is some sort of issue you need 
> to address.   With a combinator based parser, you basically have to do 
> program verification, or more pragmatically, have a large test suite and hope 
> that you tested everything.
>
> I think the current approach is actually quite reasonable:  use the Happy 
> grammar to parse out the basic structure of the program, without trying to be 
> completely precise, and then have a separate pass that validates and fixes up 
> the results.   While this has the draw-back of some constructors being in the 
> "wrong place", there are also benefits---namely we can report better parse 
> errors.  Also, with the new rewrite of HsSyn, we should be able to mark such 
> constructors as only usable in the parsing pass, so later passes wouldn't 
> need to worry about them.
>
> -Iavor
>
>
>
>
>
>
>
>
>
>
>
> On Mon, Oct 8, 2018 at 2:26 PM Simon Peyton Jones via ghc-devs 
>  wrote:
>>
>> I'm no parser expert, but a parser that was easier to understand and modify, 
>> and was as fast as the current one, sounds good to me.
>>
>> It's a tricky area though; e.g. the layout rule.
>>
>> Worth talking to Simon Marlow.
>>
>> Simon
>>
>>
>>
>> | -----Original Message-----
>> | From: ghc-devs  On Behalf Of Vladislav
>> | Zavialov
>> | Sent: 08 October 2018 21:44
>> | To: ghc-devs 
>> | Subject: Parser.y rewrite with parser combinators
>> |
>> | Hello devs,
>> |
>> | Recently I've been working on a couple of parsing-related issues in
>> | GHC. I implemented support for the -XStarIsType extension, fixed
>> | parsing of the (!) type operator (Trac #15457), allowed using type
>> | operators in existential contexts (Trac #15675).
>> |
>> | Doing these tasks required way more engineering effort than I expected
>> | from my prior experience working with parsers due to complexities of
>> | GHC's grammar.
>> |
>> | In the last couple of days, I've been working on Trac #1087 - a
>> | 12-year old parsing bug. After trying out a couple of approaches, to
>> | my dismay I realised that fixing it properly (including support for
>> | bang patterns inside infix constructors, etc) would require a complete
>> | rewrite of expression and pattern parsing logic.
>> |
>> | Worse yet, most of the work would be done outside Parser.y in Haskell
>> | co

Re: Parser.y rewrite with parser combinators

2018-10-08 Thread Vladislav Zavialov
That is a very good point, thank you! I have not thought about
incremental parsing. That's something I need to research before I
start the rewrite.
On Tue, Oct 9, 2018 at 1:06 AM Alan & Kim Zimmerman  wrote:
>
> I am not against this proposal,  but want to raise a possible future concern.
>
> As part of improving the haskell tooling environment I am keen on making GHC 
> incremental, and have started a proof of concept based in the same techniques 
> as used in the tree-sitter library.
>
> This is achieved by modifying happy, and requires minimal changes to the 
> existing Parser.y.
>
> It would be unfortunate if this possibility was prevented by this rewrite.
>
> Alan


Re: Parser.y rewrite with parser combinators

2018-10-08 Thread Vladislav Zavialov
> I'm also not sure what exactly parser combinators provide over Happy.

Parser combinators offer backtracking. With 'happy' we get the
guarantee that we parse in linear time, but we lose it because of
post-processing that is not guaranteed to be linear. I think it'd be
easier to backtrack in the parser itself rather than in a later pass.
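A minimal sketch of that backtracking, using the thread's own `do K x y z` ambiguity (invented names, not a real library): the first branch commits to the pattern reading, and if it fails, the alternative reparses the same input as an expression.

```haskell
newtype P a = P { runP :: String -> Maybe (a, String) }

instance Functor P where
  fmap f (P p) = P $ \s -> fmap (\(a, rest) -> (f a, rest)) (p s)

-- Backtracking alternative: on failure of the first branch, the second
-- branch restarts from the ORIGINAL input.
(<||>) :: P a -> P a -> P a
P p <||> P q = P $ \s -> maybe (q s) Just (p s)

-- Match an exact prefix; linear in the prefix length.
string :: String -> P String
string w = P $ \s ->
  if take (length w) s == w
    then Just (w, drop (length w) s)
    else Nothing

data Parsed = AsPat String | AsExpr String deriving (Eq, Show)

-- `K x y z <-` resolves as a pattern bind; bare `K x y z` is an
-- expression. Each branch is one linear pass, so this ambiguity costs
-- at most two passes over the shared prefix.
pAmbiguous :: P Parsed
pAmbiguous = (AsPat <$> string "K x y z <-") <||> (AsExpr <$> string "K x y z")
```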


On Tue, Oct 9, 2018 at 6:47 AM Vanessa McHale  wrote:
>
> I actually have some experience in this department, having authored both 
> madlang and language-ats. Parsers using combinators alone are more brittle 
> than parsers using Happy, at least for human-facing languages.
>
> I'm also not sure what exactly parser combinators provide over Happy. It has 
> macros that can emulate e.g. between, many. Drawing up a minimal example 
> might be a good idea.
>
>
> On 10/08/2018 05:24 PM, Vladislav Zavialov wrote:
>
> complex parsers written using parsing combinators is that they tend to be 
> quite difficult to modify and have any kind of assurance that now you haven't 
> broken something else
>
> That's true regardless of implementation technique, parsers are rather
> delicate. A LALR-based parser generator does provide more information
> when it detects shift/reduce and reduce/reduce conflicts, but I never
> found this information useful. It was always quite the opposite of
> being helpful - an indication that a LALR parser could not handle my
> change and I had to look for workarounds.
>
> With a combinator based parser, you basically have to do program 
> verification, or more pragmatically, have a large test suite and hope that 
> you tested everything.
>
> Even when doing modifications to Parser.y, I relied mainly on the test
> suite to determine whether my change was right (and the test suite
> always caught many issues). A large test suite is the best approach
> both for 'happy'-based parsers and for combinator-based parsers.
>
> and then have a separate pass that validates and fixes up the results
>
> That's where my concern lies. This separate pass is confusing (at
> least for me - it's not the most straightforward thing to parse
> something incorrectly and then restructure it), it is hard to modify,
> it does not handle corner cases (e.g. #1087).
>
> Since we have all this Haskell code that does a significant portion of
> processing, why even bother with having a LALR pass before it?
>
> namely we can report better parse errors
>
> I don't think that's true, we can achieve better error messages with
> recursive descent.
>
> Also, with the new rewrite of HsSyn, we should be able to mark such 
> constructors as only usable in the parsing pass, so later passes wouldn't 
> need to worry about them.
>
> Not completely true, GhcPs-parametrized structures are the final
> output of parsing, so at least the renamer will face these
> constructors.
>
> On Tue, Oct 9, 2018 at 1:00 AM Iavor Diatchki  
> wrote:
>
> Hello,
>
> my experience with complex parsers written using parsing combinators is that 
> they tend to be quite difficult to modify and have any kind of assurance that 
> now you haven't broken something else.   While reduce-reduce errors are 
> indeed annoying, you at least know that there is some sort of issue you need 
> to address.   With a combinator based parser, you basically have to do 
> program verification, or more pragmatically, have a large test suite and hope 
> that you tested everything.
>
> I think the current approach is actually quite reasonable:  use the Happy 
> grammar to parse out the basic structure of the program, without trying to be 
> completely precise, and then have a separate pass that validates and fixes up 
> the results.   While this has the draw-back of some constructors being in the 
> "wrong place", there are also benefits---namely we can report better parse 
> errors.  Also, with the new rewrite of HsSyn, we should be able to mark such 
> constructors as only usable in the parsing pass, so later passes wouldn't 
> need to worry about them.
>
> -Iavor
>
>
>
>
>
>
>
>
>
>
>
> On Mon, Oct 8, 2018 at 2:26 PM Simon Peyton Jones via ghc-devs 
>  wrote:
>
> I'm no parser expert, but a parser that was easier to understand and modify, 
> and was as fast as the current one, sounds good to me.
>
> It's a tricky area though; e.g. the layout rule.
>
> Worth talking to Simon Marlow.
>
> Simon
>
>
>
> | -----Original Message-----
> | From: ghc-devs  On Behalf Of Vladislav
> | Zavialov
> | Sent: 08 October 2018 21:44
> | To: ghc-devs 
> | Subject: Parser.y rewrite with parser combinators
> |
> | Hello devs,
>

Re: Parser.y rewrite with parser combinators

2018-10-09 Thread Vladislav Zavialov
> For example, if we see `do K x y z ...`, we don't know whether we're parsing 
> an expression or a pattern before we can see what's in the ..., which is 
> arbitrarily later than the ambiguity starts. Of course, while we can write a 
> backtracking parser with combinators, doing so doesn't seem like a 
> particularly swell idea.

Backtracking is exactly what I wanted to do here. Perhaps it is a lack
of theoretical background on my part showing, but I do not see
downsides to it. It supposedly robs us of the linear-time guarantee,
but consider this.

With 'happy' and post-processing we

1. Parse into an expression (linear in the amount of tokens)
2. If it turns out we needed a pattern, rejig (linear in the size of expression)

With parser combinators

1. Parse into an expression (linear in the amount of tokens)
2. If it turns out we needed a pattern, backtrack and parse into a
pattern (linear in the amount of tokens)

Doesn't the post-processing that we do today mean that we don't
actually take advantage of the linearity guarantee?
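For comparison, here is a heavily simplified sketch of the current two-pass strategy described in step 2 (GHC's real version is far richer and lives in RdrHsSyn): the first pass parses tokens into an expression, and a second linear pass over the resulting tree "rejigs" it into a pattern when context demands one. The types and the `checkPattern` name here are illustrative stand-ins, not GHC's actual definitions.

```haskell
data Expr = EVar String | EWild | EApp Expr Expr deriving (Eq, Show)
data Pat  = PVar String | PWild | PCon String [Pat] deriving (Eq, Show)

-- Second pass: rewrite an already-parsed Expr into a Pat, failing on
-- expression forms that have no pattern reading.
checkPattern :: Expr -> Maybe Pat
checkPattern (EVar v)
  | startsUpper v = Just (PCon v [])   -- constructor name
  | otherwise     = Just (PVar v)      -- variable binding
checkPattern EWild      = Just PWild
checkPattern (EApp f x) = do
  fp  <- checkPattern f
  arg <- checkPattern x
  case fp of
    PCon c args -> Just (PCon c (args ++ [arg]))  -- constructor application
    _           -> Nothing  -- e.g. `f x`: a valid expression, not a pattern

startsUpper :: String -> Bool
startsUpper (c:_) = c >= 'A' && c <= 'Z'
startsUpper _     = False
```

Both passes are linear, but the second walks the tree rather than the token stream — which is the sense in which the linearity guarantee of the LALR pass is not the whole story.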

On Tue, Oct 9, 2018 at 3:31 AM Richard Eisenberg  wrote:
>
> I, too, have wondered about this.
>
> A pair of students this summer were working on merging the type-level and 
> term-level parsers, in preparation for, e.g., visible dependent 
> quantification in terms (not to mention dependent types). If successful, this 
> would have been an entirely internal refactor. In any case, it seemed 
> impossible to do in an LALR parser, so the students instead parsed into a new 
> datatype Term, which then got converted either to an HsExpr, an HsPat, or an 
> HsType. The students never finished. But the experience suggests that moving 
> away from LALR might be a good move.
>
> All that said, I'm not sure how going to parser combinators stops us from 
> needing an intermediate datatype to parse expressions/patterns into before we 
> can tell whether they are expressions or patterns. For example, if we see `do 
> K x y z ...`, we don't know whether we're parsing an expression or a pattern 
> before we can see what's in the ..., which is arbitrarily later than the 
> ambiguity starts. Of course, while we can write a backtracking parser with 
> combinators, doing so doesn't seem like a particularly swell idea. This isn't 
> an argument against using parser combinators, but fixing the 
> pattern/expression ambiguity was a "pro" listed for them -- except I don't 
> think this is correct.
>
> Come to think of it, the problem with parsing expressions vs. types would 
> persist just as much in the combinator style as it does in the LALR style, so 
> perhaps I've talked myself into a corner. Nevertheless, it seems awkward to 
> do half the parsing in one language (happy) and half in another.
>
> Richard
>
> > On Oct 8, 2018, at 6:38 PM, Vladislav Zavialov  
> > wrote:
> >
> > That is a very good point, thank you! I have not thought about
> > incremental parsing. That's something I need to research before I
> > start the rewrite.
> > On Tue, Oct 9, 2018 at 1:06 AM Alan & Kim Zimmerman  
> > wrote:
> >>
> >> I am not against this proposal,  but want to raise a possible future 
> >> concern.
> >>
> >> As part of improving the haskell tooling environment I am keen on making 
> >> GHC incremental, and have started a proof of concept based in the same 
> >> techniques as used in the tree-sitter library.
> >>
> >> This is achieved by modifying happy, and requires minimal changes to the 
> >> existing Parser.y.
> >>
> >> It would be unfortunate if this possibility was prevented by this rewrite.
> >>
> >> Alan
>


Re: Parser.y rewrite with parser combinators

2018-10-09 Thread Vladislav Zavialov
> which in turn is a strong hint that the language you're trying to parse has 
> dark corners. IMHO every language designer and e.g. everybody proposing a 
> syntactic extension to GHC should try to fit this into a grammar for Happy 
> *before* proposing that extension

I do agree here! Having a language that has a context-free grammar
would be superb. The issue is that Haskell with GHC extensions is
already far from this point and it isn't helping to first pretend that
it is, and then do half of the parsing in post-processing because it
has no such constraints.
On Tue, Oct 9, 2018 at 10:23 AM Sven Panne  wrote:
>
> On Tue, 9 Oct 2018 at 00:25, Vladislav Zavialov wrote
> :
>>
>> [...] That's true regardless of implementation technique, parsers are rather
>> delicate.
>
>
> I think it's not the parsers themselves which are delicate, it is the 
> language that they should recognize.
>
>>
>> A LALR-based parser generator does provide more information
>> when it detects shift/reduce and reduce/reduce conflicts, but I never
>> found this information useful. It was always quite the opposite of
>> being helpful - an indication that a LALR parser could not handle my
>> change and I had to look for workarounds. [...]
>
>
> Not that this would help at this point, but: The conflicts reported by parser 
> generators like Happy are *extremely* valuable, they hint at tricky/ambiguous 
> points in the grammar, which in turn is a strong hint that the language 
> you're trying to parse has dark corners. IMHO every language designer and 
> e.g. everybody proposing a syntactic extension to GHC should try to fit this 
> into a grammar for Happy *before* proposing that extension. If you get 
> conflicts, it is a very strong hint that the language is hard to parse by 
> *humans*, too, which is the most important thing to consider. Haskell already 
> has tons of syntactic warts which can only be parsed by infinite lookahead, 
> which is only a minor technical problem, but a major usability problem. 
> "Programs are meant to be read by humans and only incidentally for computers 
> to execute." (D.E.K.)  ;-)
>
> The situation is a bit strange: We all love strong guarantees offered by type 
> checking, but somehow most people shy away from "syntactic type checking" 
> offered by parser generators. Parser combinators are the Python of parsing: 
> Easy to use initially, but a maintenance hell in the long run for larger 
> projects...
>
> Cheers,
>S.


Re: Parser.y rewrite with parser combinators

2018-10-09 Thread Vladislav Zavialov
>  backtrack while backtracking <...> I can almost guarantee that this will 
> happen unless you use formal methods

That is a great idea, I can track backtracking depth in a type-level
natural number and make sure it doesn't go over 1 (or add
justification with performance analysis when it does). Formal methods
for the win :-)
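A minimal sketch of what such a depth-indexed parser might look like. This is an entirely hypothetical design, not existing GHC code: the parser type carries a type-level natural tracking how many levels of backtracking it may perform, and every entry point demands the bound, so exceeding it becomes a compile-time error rather than a runtime blowup.

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
import GHC.TypeLits

-- Parser indexed by a type-level backtracking depth d.
newtype P (d :: Nat) a = P { runP :: String -> Maybe (a, String) }

item :: P 0 Char
item = P $ \s -> case s of
  (c:cs) -> Just (c, cs)
  []     -> Nothing

-- The only combinator that raises the depth index.
try :: P d a -> P (d + 1) a
try (P p) = P p

-- Alternation keeps the depth of its branches.
orElse :: P d a -> P d a -> P d a
orElse (P p) (P q) = P $ \s -> maybe (q s) Just (p s)

-- Entry point: any parser whose depth index exceeds 1 is rejected
-- by the type checker, so "backtracking while backtracking" cannot
-- sneak in.
runBounded :: (d <= 1) => P d a -> String -> Maybe (a, String)
runBounded = runP
```

Richard's view-pattern example further down the thread shows why a bound of 1 is likely too optimistic in practice, but the mechanism itself type-checks.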
On Tue, Oct 9, 2018 at 10:27 AM Sven Panne  wrote:
>
> On Tue, 9 Oct 2018 at 09:18, Vladislav Zavialov wrote
> :
>>
>> [...] With parser combinators
>>
>> 1. Parse into an expression (linear in the amount of tokens)
>> 2. If it turns out we needed a pattern, backtrack and parse into a
>> pattern (linear in the amount of tokens) [...]
>
>
> In a larger grammar implemented by parser combinators it is quite hard to 
> guarantee that you don't backtrack while backtracking, which would easily 
> result in exponential runtime. And given the size of the language GHC 
> recognizes, I can almost guarantee that this will happen unless you use 
> formal methods. :-)
>
> Cheers,
>S.


Re: Parser.y rewrite with parser combinators

2018-10-09 Thread Vladislav Zavialov
It's a nice way to look at the problem, and we're facing the same
issues as with insufficiently powerful type systems. LALR is the Go of
parsing in this case :)

I'd rather write Python and have a larger test suite than deal with
lack of generics in Go, if you allow me to take the analogy that far.

In fact, we do have a fair share of boilerplate in our current grammar
due to lack of parametrisation. That's another issue that would be
solved by parser combinators (or by a fancier parser generator, but
I'm not aware of one).

On Tue, Oct 9, 2018 at 1:52 PM Simon Peyton Jones  wrote:
>
> We all love strong guarantees offered by type checking, but somehow most 
> people shy away from "syntactic type checking" offered by parser generators. 
> Parser combinators are the Python of parsing: Easy to use initially, but a 
> maintenance hell in the long run for larger projects...
>
> I’d never thought of it that way before – interesting.
>
>
>
> Simon
>
>
>
> From: ghc-devs  On Behalf Of Sven Panne
> Sent: 09 October 2018 08:23
> To: vlad.z.4...@gmail.com
> Cc: GHC developers 
> Subject: Re: Parser.y rewrite with parser combinators
>
>
>
> On Tue, 9 Oct 2018 at 00:25, Vladislav Zavialov wrote
> :
>
> [...] That's true regardless of implementation technique, parsers are rather
> delicate.
>
>
>
> I think it's not the parsers themselves which are delicate, it is the 
> language that they should recognize.
>
>
>
> A LALR-based parser generator does provide more information
> when it detects shift/reduce and reduce/reduce conflicts, but I never
> found this information useful. It was always quite the opposite of
> being helpful - an indication that a LALR parser could not handle my
> change and I had to look for workarounds. [...]
>
>
>
> Not that this would help at this point, but: The conflicts reported by parser 
> generators like Happy are *extremely* valuable, they hint at tricky/ambiguous 
> points in the grammar, which in turn is a strong hint that the language 
> you're trying to parse has dark corners. IMHO every language designer and 
> e.g. everybody proposing a syntactic extension to GHC should try to fit this 
> into a grammar for Happy *before* proposing that extension. If you get 
> conflicts, it is a very strong hint that the language is hard to parse by 
> *humans*, too, which is the most important thing to consider. Haskell already 
> has tons of syntactic warts which can only be parsed by infinite lookahead, 
> which is only a minor technical problem, but a major usability problem. 
> "Programs are meant to be read by humans and only incidentally for computers 
> to execute." (D.E.K.)  ;-)
>
>
>
> The situation is a bit strange: We all love strong guarantees offered by type 
> checking, but somehow most people shy away from "syntactic type checking" 
> offered by parser generators. Parser combinators are the Python of parsing: 
> Easy to use initially, but a maintenance hell in the long run for larger 
> projects...
>
>
>
> Cheers,
>
>S.


Re: Parser.y rewrite with parser combinators

2018-10-09 Thread Vladislav Zavialov
I agree with you. This example puts a nail in the coffin of the
backtracking approach.

I will have to think of something else, and at this point a full
rewrite to parser combinators does not seem as appealing.

Thanks!

On Tue, Oct 9, 2018 at 4:45 PM Richard Eisenberg  wrote:
>
> I think one problem is that we don't even have bounded levels of 
> backtracking, because (with view patterns) you can put expressions into 
> patterns.
>
> Consider
>
> > f = do K x (z -> ...
>
> Do we have a constructor pattern with a view pattern inside it? Or do we have 
> an expression with a required visible type application and a function type? 
> (This last bit will be possible only with visible dependent quantification in 
> terms, but I'm confident that Vlad will appreciate the example.) We'll need 
> nested backtracking to sort this disaster out -- especially if we have 
> another `do` in the ...
>
> What I'm trying to say here is that tracking the backtracking level in types 
> doesn't seem like it will fly (tempting though it may be).
>
> Richard
>
> > On Oct 9, 2018, at 7:08 AM, Vladislav Zavialov  
> > wrote:
> >
> > It's a nice way to look at the problem, and we're facing the same
> > issues as with insufficiently powerful type systems. LALR is the Go of
> > parsing in this case :)
> >
> > I'd rather write Python and have a larger test suite than deal with
> > lack of generics in Go, if you allow me to take the analogy that far.
> >
> > In fact, we do have a fair share of boilerplate in our current grammar
> > due to lack of parametrisation. That's another issue that would be
> > solved by parser combinators (or by a fancier parser generator, but
> > I'm not aware of one).
> >
> > On Tue, Oct 9, 2018 at 1:52 PM Simon Peyton Jones  
> > wrote:
> >>
> >> We all love strong guarantees offered by type checking, but somehow most 
> >> people shy away from "syntactic type checking" offered by parser 
> >> generators. Parser combinators are the Python of parsing: Easy to use 
> >> initially, but a maintenance hell in the long run for larger projects...
> >>
> >> I’d never thought of it that way before – interesting.
> >>
> >>
> >>
> >> Simon
> >>
> >>
> >>
> >> From: ghc-devs  On Behalf Of Sven Panne
> >> Sent: 09 October 2018 08:23
> >> To: vlad.z.4...@gmail.com
> >> Cc: GHC developers 
> >> Subject: Re: Parser.y rewrite with parser combinators
> >>
> >>
> >>
> >> On Tue, 9 Oct 2018 at 00:25, Vladislav Zavialov wrote
> >> :
> >>
> >> [...] That's true regardless of implementation technique, parsers are 
> >> rather
> >> delicate.
> >>
> >>
> >>
> >> I think it's not the parsers themselves which are delicate, it is the 
> >> language that they should recognize.
> >>
> >>
> >>
> >> A LALR-based parser generator does provide more information
> >> when it detects shift/reduce and reduce/reduce conflicts, but I never
> >> found this information useful. It was always quite the opposite of
> >> being helpful - an indication that a LALR parser could not handle my
> >> change and I had to look for workarounds. [...]
> >>
> >>
> >>
> >> Not that this would help at this point, but: The conflicts reported by 
> >> parser generators like Happy are *extremely* valuable, they hint at 
> >> tricky/ambiguous points in the grammar, which in turn is a strong hint 
> >> that the language you're trying to parse has dark corners. IMHO every 
> >> language designer and e.g. everybody proposing a syntactic extension to 
> >> GHC should try to fit this into a grammar for Happy *before* proposing 
> >> that extension. If you get conflicts, it is a very strong hint that the 
> >> language is hard to parse by *humans*, too, which is the most important 
> >> thing to consider. Haskell already has tons of syntactic warts which can 
> >> only be parsed by infinite lookahead, which is only a minor technical 
> >> problem, but a major usability problem. "Programs are meant to be read by 
> >> humans and only incidentally for computers to execute." (D.E.K.)  
> >> ;-)
> >>
> >>
> >>
> >> The situation is a bit strange: We all love strong guarantees offered by 
> >> type checking, but somehow most people shy away from "syntactic type 
> >> checking" offered by parser generators. Parser combinators are the Python 
> >> of parsing: Easy to use initially, but a maintenance hell in the long run 
> >> for larger projects...
> >>
> >>
> >>
> >> Cheers,
> >>
> >>   S.
>


Re: Treatment of unknown pragmas

2018-10-16 Thread Vladislav Zavialov
What about introducing -fno-warn-pragma=XXX? People who use HLint will add
-fno-warn-pragma=HLINT to their build configuration.
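As an illustration only — the `-fno-warn-pragma=XXX` flag proposed here is hypothetical and does not exist in GHC — today's behaviour is that `-Wunrecognized-pragmas` (on by default) warns about a tool pragma like the one below but still compiles the module. The proposed flag would silence the warning for the whitelisted name only, so a misspelling such as HLNIT would still be reported:

```haskell
{-# HLINT ignore "Redundant id" #-}  -- tool pragma, unknown to GHC:
-- today this triggers -Wunrecognized-pragmas but compiles fine.

answer :: Int
answer = id 42
```

The project's build configuration would then carry something like `ghc-options: -fno-warn-pragma=HLINT` (again, hypothetical syntax) for each tool it actually uses.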

On Tue, Oct 16, 2018, 20:51 Ben Gamari  wrote:

> Hi everyone,
>
> Recently Neil Mitchell opened a pull request [1] proposing a single-line
> change: Adding `{-# HLINT ... #-}` to the list of pragmas ignored by the
> lexer. I'm a bit skeptical of this idea. Afterall, adding cases to the
> lexer for every tool that wants a pragma seems quite unsustainable.
>
> On the other hand, a reasonable counter-argument could be made on the
> basis of the Haskell Report, which specifically says that
> implementations should ignore unrecognized pragmas. If GHC did this
> (instead of warning, as it now does) then this wouldn't be a problem.
>
> Of course, silently ignoring mis-typed pragmas sounds terrible from a
> usability perspective. For this reason I proposed that the following
> happen:
>
>  * The `{-# ... #-}` syntax be reserved in particular for compilers (it
>largely already is; the Report defines it as "compiler pragma"
>syntax). The next Report should also allow implementations to warn in
>the case of unrecognized pragmas.
>
>  * We introduce a "tool pragma" convention (perhaps even standardized in
>the next Report). For this we can follow the model of Liquid Haskell:
>`{-@ $TOOL_NAME ... @-}`.
>
> Does this sound sensible?
>
> Cheers,
>
> - Ben
>
>
> [1] https://github.com/ghc/ghc/pull/204


Re: Treatment of unknown pragmas

2018-10-17 Thread Vladislav Zavialov
> And sacrifice checking for misspelled pragma names in those namespaces?  Sure 
> we can say {-# TOOL FOO .. #-} is ignored by GHC, but then nothing will notice 
> if you say {-# TOOL HLNIT ... #-} by mistake.

Yes! But we can't have the whitelist of pragmas hardcoded in GHC, as
there may be arbitrarily named tools out there. Only the user knows
what tools they use, so they must maintain their own whitelist in the
build configuration. That's why we should have -Wunrecognised-pragmas
together with -Wno-pragma=HLINT, -Wno-pragma=LIQUID, ...
On Wed, Oct 17, 2018 at 5:23 PM Simon Marlow  wrote:
>
> On Wed, 17 Oct 2018 at 15:02, Ben Gamari  wrote:
>>
>> Simon Marlow  writes:
>>
>> > Simon - GHC provides some protection against mistyped pragma names, in the
>> > form of the -Wunrecognised-pragmas warning, but only for {-# ... #-}
>> > pragmas. If tools decide to use their own pragma syntax, they don't benefit
>> > from this. That's one downside, in addition to the others that Neil
>> > mentioned.
>> >
>> > You might say we shouldn't care about mistyped pragma names. If the user
>> > accidentally writes {- HLNIT -} and it is silently ignored, that's not our
>> > problem. OK, but we cared about it enough for the pragmas that GHC
>> > understands to add the special warning, and it's reasonable to expect that
>> > HLint users also care about it.
>> >
>> If this is the case then in my opinion HLint should be the one that
>> checks for mis-spelling.
>
>
> But there's no way that HLint can know what is a misspelled pragma name.
>
>> If we look beyond HLint, there is no way that
>> GHC could know generally what tokens are misspelled pragmas and which
>> are tool names.
>
>
> Well this is the problem we created by adding -Wunrecognised-pragmas :)  Now 
> GHC has to know what all the correctly-spelled pragma names are, and the 
> HLint diff is just following this path.
>
> Arguably -Wunrecognised-pragmas is ill-conceived.  I'm surprised we didn't 
> have this discussion when it was added (or maybe we did?). But since we have 
> it, it comes with an obligation to have a centralised registry of pragma 
> names, which is currently in GHC. (it doesn't have to be in the source code, 
> of course)
>
>> I'm trying to view the pragma question from the perspective of setting a
>> precedent for other tools. If a dozen Haskell tools were to approach us
>> tomorrow and ask for similar treatment to HLint it's clear that
>> hardcoding pragma lists in the lexer would be unsustainable.
>>
>> Is this likely to happen? Of course not. However, it is an indication to
>> me that the root cause of this current debate is our lack of a good
>> extensible pragma mechanism. It seems to me that introducing a tool pragma
>> convention, from which tool users can claim namespaces at will, is the
>> right way to fix this.
>
>
> And sacrifice checking for misspelled pragma names in those namespaces?  Sure 
> we can say {-# TOOL FOO .. #-} is ignored by GHC, but then nothing will notice 
> if you say {-# TOOL HLNIT ... #-} by mistake.  If we decide to do that then 
> fine, it just seems like an inconsistent design.
>
> Cheers
> Simon
>
>>
>>
>> Cheers,
>>
>> - Ben
>


Re: [GHC DevOps Group] The future of Phabricator

2018-11-01 Thread Vladislav Zavialov
To put in my 2¢: I will be happy with whatever service provides the
most reliable CI.

In terms of workflow, I like Ben's suggestion:

  * Consider a PR to be a stack of differentials, with each commit
being an atomic change in that stack.


Re: Type-level sized Word literals???

2023-10-30 Thread Vladislav Zavialov via ghc-devs
> I am working on some code where it is useful to have types indexed by a 
> 16-bit unsigned value.

This is great to hear. I've always wanted to make it possible to
promote all numeric types: Natural, Word8, Word16, Word32, Word64,
Integer, Int8, Int16, Int32, Int64. (Then, as the next step, even the
floating-point types Float and Double). I see it as a step towards
universal promotion, i.e. being able to use any data type as a kind.

The problem is that such a change would require a GHC proposal, and I
don't have a strong motivating use case to write one. But you seem to
have one! If you'd like to co-author a GHC proposal and if the
proposal gets accepted, I can implement the feature.

Here's how I imagine it could work.

1. Currently, a type-level literal like `15` is inferred to have kind
`Nat` (and `Nat` is a synonym for `Natural` nowadays). At the
term-level, however, the type of `15` is `forall {a}. Num a => a`. I'd
like to follow the term-level precedent as closely as possible, except
we don't have type class constraints in kinds, so it's going to be
simply `15 :: forall {k}. k`.
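The term-level precedent referred to here can be seen directly: the same bare literal elaborates to `fromInteger (15 :: Integer)` at whichever `Num` type the context demands (`Word16` is chosen below purely as an example):

```haskell
-- At the term level, a numeric literal is overloaded via Num.
import Data.Word (Word16)

n1 :: Integer
n1 = 15          -- fromInteger (15 :: Integer) at type Integer

n2 :: Word16
n2 = 15          -- same literal, a different Num instance

main :: IO ()
main = print (n1, toInteger n2)
```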

2. The desugaring should also follow the term-level precedent. `15`
actually stands for `fromInteger (15 :: Integer)`, and I expect no
less at the type level, where we could introduce a type family
`FromInteger :: Integer -> k`, with the following instances

   type instance FromInteger @Natural n = NaturalFromInteger n
   type instance FromInteger @Word8 n = Word8FromInteger n
   type instance FromInteger @Word16 n = Word16FromInteger n
   ...

   The helper type families `NaturalFromInteger`, `Word8FromInteger`,
`Word16FromInteger` etc. are partial, e.g. `NaturalFromInteger (10 ::
Integer)` yields `10 :: Natural` whereas `NaturalFromInteger (-10)` is
stuck.
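The partiality described above has a present-day term-level analogue, sketched below: reify a type-level `Nat` and check the 16-bit bound when converting. `word16Val` is a hypothetical helper written for illustration, not an existing GHC API:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables, TypeApplications #-}
-- Today's workaround: index by Nat, enforce the Word16 range at the
-- term level when reifying the type-level number.
import Data.Proxy (Proxy (..))
import Data.Word (Word16)
import GHC.TypeLits (KnownNat, Nat, natVal)

word16Val :: forall (n :: Nat). KnownNat n => Proxy n -> Maybe Word16
word16Val p
  | v <= 65535 = Just (fromIntegral v)  -- fits in 16 bits
  | otherwise  = Nothing                -- out of range: "stuck" at runtime
  where
    v = natVal p

main :: IO ()
main = do
  print (word16Val (Proxy @443))
  print (word16Val (Proxy @70000))
```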

I have a fairly good idea of what it'd take to implement this (i.e.
the changes to the GHC parser, type checker, and libraries), and the
change has been on my mind for a while. The use case that you have
might be the last piece of the puzzle to get this thing rolling.

Can you tell more about the code you're writing? Would it be possible
to use it as the basis for the "Motivation" section of a GHC proposal?

Vlad

On Mon, Oct 30, 2023 at 6:06 AM Viktor Dukhovni  wrote:
>
> I am working on some code where it is useful to have types indexed by a
> 16-bit unsigned value.
>
> Presently, I am using type-level naturals and having to now and then
> provide witnesses that a 'Nat' value I am working with is at most 65535.
>
> Also, perhaps there's some inefficiency here, as Naturals have two
> constructors, and so a more complex representation with (AFAIK) no
> unboxed forms.
>
> I was wondering what it would take to have type-level fixed-size
> Words (Word8, Word16, Word32, Word64) to go along with the Nats?
>
> It looks like some of the machinery (KnownWord16, SomeWord16, wordVal16,
> etc.) can be copied straight out of GHC.TypeNats with minor changes, and
> that much works, but the three major things that aren't easily done seem
> to be:
>
> - There are it seems no TypeReps for types of Kind Word16, so one can't
>   have (Typeable (Foo w)) with (w :: Word16).
>
> - There are no literals of a promoted Word16 Kind.
>
> type Foo :: Word16 -> Type
> data Foo w = MkFoo Int
>
> -- 1 has Kind 'Natural' (a.k.a. Nat)
> x = MkFoo 13 :: Foo 1 -- Rejected
>
> -- The new ExtendedLiterals syntax does not help
> --
> x = MkFoo 13 :: Foo (W16# 1#Word16) -- Syntax error!
>
> - There are unsurprisingly also no built-in 'KnownWord16' instances
>   for any hypothetical type-level literals of Kind Word16.
>
> Likely the use case for type-level fixed-size words is too specialised
> to rush to shoehorn into GHC, but is there something on the not too
> distant horizon that would make it easier and reasonable to have
> fixed-size unsigned integral type literals available?
>
> [ I don't see a use-case for signed versions, they can trivially be
>   represented by the unsigned value of the same width. ]
>
> With some inconvenience, in many cases I can perhaps synthesise Proxies
> for types of Kind Word16, and just never use literals directly.
>
> --
> Viktor.


Re: Type-level sized Word literals???

2023-10-30 Thread Vladislav Zavialov via ghc-devs
I agree caution is warranted, but I still want the type level to behave as
closely as possible to the term level, where literals are currently
overloaded.

I don't care if it's monomorphic literals everywhere or overloaded literals
everywhere, but I oppose a discrepancy.

Vlad

On Mon, Oct 30, 2023, 10:05 Simon Peyton Jones 
wrote:

> I'm pretty cautious about attempting to replicate type classes (or a
> weaker version thereof) at the kind level.  An alternative would be to use
> *non-overloaded* literals.
>
> Simon
>