Fwd: Will there be a GHC 9.2.6?

2022-12-30 Thread Clinton Mead
Hi All

I just noticed Haskell Language Server (HLS) 1.9 has been released, which
supports GHC 9.2.5. My organisation is currently using GHC 9.2.2 and I
don't see any immediate need to jump to GHC 9.4, so it would be good to
settle on the latest, most stable version of the GHC 9.2 series, as it
includes all the features we need at the moment. Also, I understand that
HLS tends to have long-term(ish) support for the latest GHC in a release series.

I'd rather not do this upgrade twice, so I was just wondering whether it has
been decided yet that there will be a 9.2.6 release (if this is still
unknown, that's okay).

I did notice this page: https://gitlab.haskell.org/ghc/ghc/-/milestones/385 but
I was unsure whether it was autogenerated or a deliberate indication that
there will be a GHC 9.2.6 at some point.

Thanks,
Clinton


'Caching' of results of default instance definitions

2022-11-22 Thread Clinton Mead
Hi All

My apologies if this is the wrong place to post questions about GHC
internals from a non-GHC dev. I thought about using GHC Users, but that
mailing list seems quiet, and I think my question relates closely to how
GHC implements things, so I suspect only a dev could answer it anyway.

I have posted the following StackOverflow question:
https://stackoverflow.com/questions/74540639/when-are-the-results-of-default-methods-in-instances-cached

I won't repeat the full contents here, but basically my question is: if
I have a class with a "method" which has a default definition, and that
default definition has no arguments on the LHS, will a separate "instance"
of that default definition be created for each instance of the class that
inherits it? The important consequence would be that the default definition
is only computed once per type.

Or is the default definition treated as a definition generalised over the
typeclass, and hence as having a hidden dictionary argument, and therefore
treated as a function, so it needs to be recomputed every time it's
referenced?
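
To make the question concrete, here is a minimal sketch (with made-up names,
not taken from the StackOverflow post) of the kind of definition I mean:

class Expensive a where
  defaultElem :: a

  table :: [a]
  -- Default definition with no arguments on the LHS:
  table = replicate 1000000 defaultElem

data Foo = Foo

instance Expensive Foo where
  defaultElem = Foo
  -- 'table' is inherited from the default definition.  The question is
  -- whether 'table' at type Foo is a thunk that is computed once and then
  -- shared, or whether, being really a function of the Expensive
  -- dictionary, it is recomputed on every use.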

Any feedback or references will be appreciated.

Thanks,
Clinton


Re: Why can't arguments be levity polymorphic for inline functions?

2021-10-08 Thread Clinton Mead
Ben,

The suggestion of erroring if the inline pragma was not there was just
because I thought it would be better than silently doing something
different. But that's just a subjective opinion; it's not core to what I'm
proposing.

Indeed there are two other options:

1. Make levity polymorphic functions implicitly inline OR
2. Compile a version which wraps all the levity polymorphism in boxes.

Either approach would mean the program would still be accepted with or
without the pragma. Whether either of them is a good idea is debatable,
but it shows it's not necessary to require a pragma.

So if it's important that excluding a pragma doesn't result in a program
being rejected, either of the above options would solve that issue.

On Sat, Oct 9, 2021 at 2:06 AM Ben Gamari  wrote:

> Chris Smith  writes:
>
> > On Fri, Oct 8, 2021 at 10:51 AM Ben Gamari  wrote:
> >
> >> In my mind the fundamental problem with this approach is that it means
> >> that a program's acceptance by the compiler hinges upon pragmas.
> >> This is a rather significant departure from the status quo, where one
> >> can remove all pragmas and still end up with a well-formed program.
> >> In this sense, pragmas aren't really part of the Haskell language but
> >> are rather bits of interesting metadata that the compiler may or may not
> >> pay heed to.
> >>
> >
> > I don't believe this is really the status quo.  In particular, the
> pragmas
> > relating to overlapping instances definitely do affect whether a program
> > type-checks or not.
>
> Yes, this is a fair point. Moreover, the same can be said of
> LANGUAGE pragmas more generally. I will rephrase my statement to reflect
> what was in my head when I initially wrote it:
>
> >> In my mind the fundamental problem with this approach is that it means
> >> that a program's acceptance by the compiler hinges upon INLINE pragmas.
> >> This is a rather significant departure from the status quo, where one
> >> can remove all INLINE, INLINEABLE, RULES, and SPECIALISE pragmas and
> >> still end up with a well-formed program.
>
> These pragmas all share the property that they don't change program
> semantics but rather merely affect operational behavior. Consequently,
> they should not change whether a program should be accepted.
>
> Cheers,
>
> - Ben
>


Re: Why can't arguments be levity polymorphic for inline functions?

2021-10-08 Thread Clinton Mead
Thanks for your reply Andreas.

Just some further thoughts: perhaps we don't even require INLINE.

Correct me if I'm wrong, but couldn't we compile even functions with levity
polymorphic arguments by just boxing all the arguments?

This would also mean the caller would have to box arguments before passing.

You'd also need to box any working variables inside the function with
levity other than `Type`.

This to a certain extent defeats the purpose of levity polymorphism when
it's intended as an optimisation to avoid boxed types, but it does give a
fallback we can always use.
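
As a rough illustration (my own sketch, not GHC's actual scheme) of what
"boxing all the arguments" means for a concrete representation:

{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int#, Int (I#), (+#))

-- A function over the unboxed Int# ...
addUnboxed :: Int# -> Int# -> Int#
addUnboxed x y = x +# y

-- ... and its boxed counterpart, which is roughly what the fallback would
-- compile: the arguments are passed as ordinary lifted values and unpacked
-- inside the function.
addBoxed :: Int -> Int -> Int
addBoxed (I# x) (I# y) = I# (x +# y)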

Yes, if you inline you can get code bloat. But that's a risk when you
inline any function.

Cheers,
Clinton

On Fri, Oct 8, 2021 at 11:39 PM Andreas Klebinger 
wrote:

> Hey Clinton,
>
> I think the state of things is best summarised as: it's in principle
> possible to implement, but it's unclear how best to do so,
> or even if it's worth having this feature at all.
>
> The biggest issue being code bloat.
>
> As you say, a caller could create its own version of the function with the
> right kind of argument type.
> But that means duplicating the function for every use site (although some
> might be able to be commoned up), potentially causing a lot of code
> bloat and compile-time overhead.
>
> In a similar fashion we could create each potential version we need from
> the get go to avoid duplicating the same function.
> But that runs the risk of generating far more code than what is actually
> used.
>
> Last but not least, GHC currently doesn't always load unfoldings. In
> particular, if you compile a module with optimizations disabled, the RHS is
> currently *not*
> available to the compiler when looking at the use site. There already is a
> mechanism to bypass this in GHC where a function *must* be inlined
> (compulsory unfoldings).
> But it's currently reserved for built-in functions. We could just make
> INLINE bindings compulsory if they have levity-polymorphic arguments, sure.
> But again it's not clear
> this is really desirable.
>
> I don't think this has to mean we couldn't change how things work to
> accommodate levity-polymorphic arguments. It just seems it's unclear what a
> good design
> would look like and if it's worth having.
>
> Cheers
> Andreas
> Am 08/10/2021 um 01:36 schrieb Clinton Mead:
>
> Hi All
>
> Not sure if this belongs in ghc-users or ghc-devs, but it seemed devy
> enough to put it here.
>
> Section 6.4.12.1
> <https://downloads.haskell.org/~ghc/9.0.1/docs/html/users_guide/exts/levity_polymorphism.html>
> of the GHC user manual points out that, if we allowed levity polymorphic
> arguments, then we would have no way to compile these functions, because
> the code required for different levities is different.
>
> However, if such a function is {-# INLINE #-} or {-# INLINABLE #-},
> there's no need to compile it, as its full definition is in the interface
> file. Callers can just compile it themselves with the levity they require.
> Indeed, callers of inline functions already compile their own versions even
> without levity polymorphism (for example, presumably inlining function
> calls that are known at compile time).
>
> The only sticking point to this that I could find was that GHC will only
> inline the function if it is fully applied
> <https://downloads.haskell.org/ghc/9.0.1/docs/html/users_guide/exts/pragmas.html#inline-pragma>,
> which suggests that the possibility of partial application means we can't
> inline and hence need a compiled version of the code. But this seems like a
> silly restriction, as we have the full RHS of the definition in the
> interface file. The caller can easily create and compile its own partially
> applied version. It should be able to do this regardless of levity.
>
> It seems to me we're okay as long as the following three things aren't
> true simultaneously:
>
> 1. Blah has levity polymorphic arguments
> 2. Blah is exported
> 3. Blah is not inline
>
> If a function "Blah" is not exported, we shouldn't care about levity
> polymorphic arguments, because we have its RHS on hand in the current
> module and can compile it as appropriate. And if it's inline, we're exposing
> its full RHS to other callers, so we're still fine also. Only when these
> three conditions combine should we give an error, say like:
>
> "Blah has levity polymorphic arguments, is exported, and is not inline.
> Please either remove the levity polymorphic arguments, not export it, or
> add an {-# INLINE #-} or {-# INLINABLE #-} pragma."
>
> I presume however there are some added complications that I don't
> understand, and I'm very interested in what they are as I presume they'll
> be quite interesting.
>
> Thanks,
> Clinton
>
>

Why can't arguments be levity polymorphic for inline functions?

2021-10-07 Thread Clinton Mead
Hi All

Not sure if this belongs in ghc-users or ghc-devs, but it seemed devy
enough to put it here.

Section 6.4.12.1
<https://downloads.haskell.org/~ghc/9.0.1/docs/html/users_guide/exts/levity_polymorphism.html>
of the GHC user manual points out that, if we allowed levity polymorphic
arguments, then we would have no way to compile these functions, because
the code required for different levities is different.
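
For reference, a minimal example (my own, not copied from the manual) of the
kind of definition that section rejects:

{-# LANGUAGE PolyKinds, KindSignatures, RankNTypes #-}
import GHC.Exts (TYPE, RuntimeRep)

-- GHC rejects this today: the argument 'x' has a levity-polymorphic type,
-- so there is no single calling convention (register vs. pointer, width,
-- and so on) with which to compile 'levId'.
levId :: forall (r :: RuntimeRep) (a :: TYPE r). a -> a
levId x = x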

However, if such a function is {-# INLINE #-} or {-# INLINABLE #-}, there's
no need to compile it, as its full definition is in the interface file.
Callers can just compile it themselves with the levity they require. Indeed,
callers of inline functions already compile their own versions even without
levity polymorphism (for example, presumably inlining function calls that
are known at compile time).

The only sticking point to this that I could find was that GHC will only
inline the function if it is fully applied
<https://downloads.haskell.org/ghc/9.0.1/docs/html/users_guide/exts/pragmas.html#inline-pragma>,
which suggests that the possibility of partial application means we can't
inline and hence need a compiled version of the code. But this seems like a
silly restriction, as we have the full RHS of the definition in the
interface file. The caller can easily create and compile its own partially
applied version. It should be able to do this regardless of levity.

It seems to me we're okay as long as the following three things aren't true
simultaneously:

1. Blah has levity polymorphic arguments
2. Blah is exported
3. Blah is not inline

If a function "Blah" is not exported, we shouldn't care about levity
polymorphic arguments, because we have its RHS on hand in the current
module and can compile it as appropriate. And if it's inline, we're exposing
its full RHS to other callers, so we're still fine also. Only when these
three conditions combine should we give an error, say like:

"Blah has levity polymorphic arguments, is exported, and is not inline.
Please either remove the levity polymorphic arguments, not export it, or add
an {-# INLINE #-} or {-# INLINABLE #-} pragma."

I presume however there are some added complications that I don't
understand, and I'm very interested in what they are as I presume they'll
be quite interesting.

Thanks,
Clinton


Re: Options for targeting Windows XP?

2021-03-25 Thread Clinton Mead
Thanks again for the detailed reply Ben.

I guess the other dream of mine is to give GHC a .NET backend. For my
problem it would be the ideal solution, but other attempts in this regard
(e.g. Eta, GHCJS, etc.) seem to have difficulty keeping up with updates to
GHC, so I'm sure it's not trivial.

It would be quite lovely though if I could generate .NET + Java + even
Python bytecode from GHC.

Whilst it wouldn't solve my immediate problem, perhaps my efforts are best
spent giving GHC a plugin architecture for backends (or, if one already
exists, trying to make a .NET backend).

I believe "Csaba Hruska" is working in this space with GRIN, yes?

I read SPJ's paper Implementing Lazy Functional Languages on Stock
Hardware: The Spineless Tagless G-machine
<https://www.microsoft.com/en-us/research/publication/implementing-lazy-functional-languages-on-stock-hardware-the-spineless-tagless-g-machine/>,
which implements STG in C, and whilst it wasn't trivial, it didn't seem
stupendously complex (even I managed to roughly follow it). I also thought
that implementing this in .NET would be easier because I could hand off
garbage collection to the .NET runtime, so there's one less thing
to worry about. I also, initially, don't care _too_ much about performance.

Of course, there's probably a whole bunch of nuance. One actually needs to,
for example, represent all the complexities of GADTs in object-oriented
classes, maybe converting sum types to inheritance hierarchies with the
Visitor pattern. And you'd also have to do your best to ensure that exposed
Haskell functions look like something sensible.
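
As a rough sketch of what that visitor-style translation of a sum type
amounts to, written in Haskell itself rather than a .NET language (the names
are made up for illustration):

data Shape = Circle Double | Rect Double Double

-- The "visitor" interface: one method per constructor.
data ShapeVisitor r = ShapeVisitor
  { visitCircle :: Double -> r
  , visitRect   :: Double -> Double -> r
  }

-- "accept": each constructor dispatches to its visitor method.  In an OO
-- target this becomes an abstract Shape class with Circle and Rect
-- subclasses, each overriding accept.
acceptShape :: Shape -> ShapeVisitor r -> r
acceptShape (Circle r) v = visitCircle v r
acceptShape (Rect w h) v = visitRect v w h

area :: Shape -> Double
area s = acceptShape s ShapeVisitor
  { visitCircle = \r -> pi * r * r
  , visitRect   = \w h -> w * h
  }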

So I guess, given I have a bit of an interest here, what would be the best
approach if I wanted to help GHC develop more backends and move toward an
architecture where people can add backends without forking GHC? Where could
I start helping that effort? Should I contact "Csaba Hruska" and get
involved in GRIN? Or is there something that I can start working on in GHC
proper?

Considering that I've been playing around with Haskell since 2002, that I'd
like to actually get paid to write it at some point in my career, and that I
have an interest in this area, perhaps this is a good place to start.
Actually helping to develop a pluggable backend architecture for GHC may be
more useful to more people over the long term than trying to hack up an
existing GHC to support 32-bit Windows XP, a battle I suspect would have to
be refought every time a new GHC version is released, given the current
structure of GHC.

On Fri, Mar 26, 2021 at 1:34 PM Ben Gamari  wrote:

> Clinton Mead  writes:
>
> > Thanks all for your replies. Just going through what Ben has said step by
> > step:
> >
> > My sense is that if you don't need the threaded runtime system it would
> >> probably be easiest to just try to make a modern GHC run on Windows XP.
> >>
> >
> > Happy to run non-threaded runtime. A good chunk of these machines will be
> > single or dual core anyway.
> >
> That indeed somewhat simplifies things.
>
> >> As Tamar suggested, it's likely not easy, but also not impossible. WinIO
> >> is indeed problematic, but thankfully the old MIO IO manager is still
> >> around (and will be in 9.2).
> >>
> >
> > "Is still around"? As in it's in the code base and just dead code, or
> can I
> > trigger GHC to use the old IO manager with a GHC option?
> >
> > The possible reasons for Windows XP incompatibility that I can think of
> >> off the top of my head are:
> >>
> >>  * Timers (we now use QueryPerformanceCounter)
> >>
> >
> > This page suggests that QueryPerformanceCounter
> > <
> https://docs.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter
> >
> > should
> > run on XP. Is this incorrect?
> >
> It's supported, but there are caveats [1] that make it unreliable as a
> timesource.
>
> [1]
> https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps#windowsxp-and-windows2000
> >
> >>  * Big-PE support, which is very much necessary for profiled builds
> >>
> >
> > I don't really need profiled builds
> >
>
> Alright, then you *probably* won't be affected by PE's symbol limit.
>
> >>  * Long file path support (mostly a build-time consideration as Haskell
> >>build systems tend to produce very long paths)
> >>
> >>
> > I don't need to build on Windows XP either. I just need to run on Windows
> > XP so hopefully this won't be an issue. Although if GHC was modified for
> > long file path support so it could build itself with long file path
> support
> > presumably it will 

Re: Options for targeting Windows XP?

2021-03-25 Thread Clinton Mead
Another gotcha that I didn't think of: the machines I'm targeting often
have 32-bit versions of Windows, and it looks like 32-bit Windows isn't
supported after GHC 8.6.

Does this move it into the too hard basket?


Re: Options for targeting Windows XP?

2021-03-24 Thread Clinton Mead
Thanks all for your replies. Just going through what Ben has said step by
step:

My sense is that if you don't need the threaded runtime system it would
> probably be easiest to just try to make a modern GHC run on Windows XP.
>

Happy to run non-threaded runtime. A good chunk of these machines will be
single or dual core anyway.


> As Tamar suggested, it's likely not easy, but also not impossible. WinIO
> is indeed problematic, but thankfully the old MIO IO manager is still
> around (and will be in 9.2).
>

"Is still around"? As in it's in the code base and just dead code, or can I
trigger GHC to use the old IO manager with a GHC option?

The possible reasons for Windows XP incompatibility that I can think of
> off the top of my head are:
>
>  * Timers (we now use QueryPerformanceCounter)
>

This page suggests that QueryPerformanceCounter
<https://docs.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter>
should run on XP. Is this incorrect?


>  * Big-PE support, which is very much necessary for profiled builds
>

I don't really need profiled builds


>  * Long file path support (mostly a build-time consideration as Haskell
>build systems tend to produce very long paths)
>
>
I don't need to build on Windows XP either. I just need to run on Windows
XP, so hopefully this won't be an issue. Although, if GHC was modified for
long file path support so that it could build itself with long file paths,
presumably that will affect everything else it builds also.


> There may be others, but I would start looking there. I am happy to
> answer any questions that might arise.
>
>
I'm guessing the way forward here might be a patch with two options:

1. -no-long-path-support/-long-path-support (default -long-path-support)
2. -winxp

The -winxp option would:

- Require -no-long-path-support
- Conflict with -threaded
- Conflict with profiled builds
- Use the old IO manager (I'm not sure if this is an option or how it is
done).

What do you think (roughly speaking)?


Options for targeting Windows XP?

2021-03-24 Thread Clinton Mead
I'm currently trying to bring my company around to using a bit of Haskell.
One issue is that a number of our clients are based in South East Asia and
need software that runs on Windows XP.

Unfortunately it seems the last version of GHC that produces executables
that run on Windows XP is GHC 7.10. Whilst this table suggests the issue
may only be with running GHC 8.0+ itself on Windows XP, I've confirmed that
GHC 8.0 executables (even "Hello World") will not run on Windows XP,
presumably because of a non-XP WinAPI call in the runtime.

My first thought would be to restrict myself to GHC 7.10 features (i.e.
2015). This would be a slight annoyance but GHC 7.10 still presents a
reasonable language. But my concern would be that increasingly I'll run
into issues with libraries that use extensions post GHC 7.10, particularly
libraries with large dependency lists.

So there's a few options I've considered at this point:

1. Use GHCJS to compile to Javascript, and then dig out a version of NodeJS
that runs on Windows XP. GHCJS seems to at least have a compiler based on
GHC 8.6.
2. Patch GHC with an additional command line argument to produce XP/Vista
compatible executables, perhaps by looking at the changes between 7.10 ->
8.0, and re-introducing the XP approach as an option.

The issue with 1 is that, as well as being limited by how up to date GHCJS
is, it will increase install size and memory usage and decrease performance
on Windows XP machines, which in our environments are often quite old and
resource- and memory-constrained.

Approach 2 is something I'd be willing to put some work into if it was
practical, but my thought is that XP support was removed for a reason,
presumably because using newer WinAPI functions simplified things
significantly. By re-adding XP support I'd be complicating GHC once again,
and GHC would effectively have to maintain two approaches. In addition, in
the long term, whenever a new WinAPI call is added one would now have to
check whether it's available in Windows XP, and if it's not, produce a
Windows XP equivalent. That might seem like just an extra burden of support
for already busy GHC developers. But on the other hand, if the GHC devs
would be happy to merge a patch and keep up XP support, this would be the
cleanest option.

But then I had a thought: GHC Core isn't supposed to change much between
versions, is it? That made me come up with these approaches:

3. Hack up a script to compile programs using GHC 9 to Core, then feed that
Core output into GHC 7.10. OR
4. Produce a chimera style GHC by importing the GHC 9.0 API and the GHC
7.10 API, and making a version of GHC that does Haskell -> Core in GHC 9.0
and the rest of the code generation in GHC 7.10.

One issue with 4 will presumably be that, because I'm importing the GHC 9.0
API and the 7.10 API separately, all their data types will technically be
separate, so I'll need to basically deep-copy the GHC 9.0 Core datatype
(and perhaps others) to GHC 7.10 datatypes. But presuming they're largely
similar, this should be fairly mechanical.
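
For illustration, here is a very rough sketch of that "deep copy" idea, with
made-up, simplified expression types standing in for the two APIs' real Core
types:

-- Hypothetical stand-ins for the GHC 9.0 and GHC 7.10 Core expression types.
data ExprNew b = VarN b | AppN (ExprNew b) (ExprNew b) | LamN b (ExprNew b)
data ExprOld b = VarO b | AppO (ExprOld b) (ExprOld b) | LamO b (ExprOld b)

-- The conversion is a mechanical structural traversal, constructor by
-- constructor; the real Core types have more cases but the same shape.
convert :: ExprNew b -> ExprOld b
convert (VarN x)   = VarO x
convert (AppN f a) = AppO (convert f) (convert a)
convert (LamN x e) = LamO x (convert e)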

So are any of these approaches (well, particularly 2 and 4) reasonable? Or
am I going to run into big problems with either of them? Is there another
approach I haven't thought of?

Thanks,
Clinton


Re: Container type classes

2019-05-30 Thread Clinton Mead
I'm not sure if this is related, but the package Map-Classes provides
about 50 functions on around a dozen key/value-like datatypes, e.g. Arrays,
Maps, Sets (where the value is ()), etc. Even ByteStrings are included
(an Int -> Word8 mapping).

You should be able to fairly easily add new types and even new functions to
the instances if you give them default implementations.

On Fri, May 31, 2019 at 9:23 AM Andrey Mokhov 
wrote:

> Thanks again Iavor,
>
> Despite the type inference issue, and the fact that this requires a
> separate type class, this is the best solution I've seen so far.
>
> Cheers,
> Andrey
>
> -Original Message-
> From: Iavor Diatchki [mailto:iavor.diatc...@gmail.com]
> Sent: 30 May 2019 23:16
> To: Andrey Mokhov 
> Cc: Brandon Allbery ; Andreas Klebinger <
> klebinger.andr...@gmx.at>; ghc-devs@haskell.org
> Subject: Re: Container type classes
>
> Yeah, there is really no relation between the two parameters of `Fun`,
> so you'd have to specify the intermediate type manually. For example:
>
> add3 :: forall s. (Fun s s, Elem s ~ Int) => s -> s
> add3 = colMap @s (+1) . colMap (+2)
>
> I wouldn't say that it's a particularly convenient interface to work
> with, unless you are working in a setting where most of the containers
> have known types.
>
>
> On Thu, May 30, 2019 at 2:58 PM Andrey Mokhov
>  wrote:
> >
> > Many thanks Iavor,
> >
> > This looks very promising! I played with your encoding a little, but
> quickly came across type inference issues. The following doesn't compile:
> >
> > add3 :: (Fun s s, Elem s ~ Int) => s -> s
> > add3 = colMap (+1) . colMap (+2)
> >
> > I'm getting:
> >
> > * Could not deduce: Elem a0 ~ Int
> >   from the context: (Fun s s, Elem s ~ Int)
> > bound by the type signature for:
> >add3 :: forall s. (Fun s s, Elem s ~ Int) => s -> s
> >   Expected type: Elem a0 -> Elem s
> > Actual type: Int -> Int
> >   The type variable `a0' is ambiguous
> >
> > Fun s s is supposed to say that the intermediate type is `s` too, but I
> guess this is not how type class resolution works.
> >
> > Cheers,
> > Andrey
> >
> > -Original Message-
> > From: Iavor Diatchki [mailto:iavor.diatc...@gmail.com]
> > Sent: 30 May 2019 22:38
> > To: Brandon Allbery 
> > Cc: Andrey Mokhov ; Andreas Klebinger <
> klebinger.andr...@gmx.at>; ghc-devs@haskell.org
> > Subject: Re: Container type classes
> >
> > This is how you could define `map`.  This is just for fun, and to
> > discuss Haskell idioms---I am not suggesting we should do it.  Of
> > course, it might be a bit more general than what you'd like---for
> > example it allows defining instances like `Fun IntSet (Set Int)` that,
> > perhaps?, you'd like to disallow:
> >
> > {-# LANGUAGE MultiParamTypeClasses, TypeFamilies #-}
> >
> > import Data.Set (Set)
> > import qualified Data.Set as Set
> > import Data.IntSet (IntSet)
> > import qualified Data.IntSet as ISet
> >
> > class Col t where
> >   type Elem t
> >   -- ... As in Andreas's example
> >
> > class (Col a, Col b) => Fun a b where
> >   colMap :: (Elem a -> Elem b) -> a -> b
> >
> > instance Col (Set a) where
> >   type Elem (Set a) = a
> >
> > instance Col IntSet where
> >   type Elem IntSet = Int
> >
> > instance Fun IntSet IntSet where
> >   colMap = ISet.map
> >
> > instance Ord b => Fun (Set a) (Set b) where
> >   colMap = Set.map
> >
> > On Thu, May 30, 2019 at 2:32 PM Brandon Allbery 
> wrote:
> > >
> > > They can, with more work. You want indexed monads, so you can describe
> types that have e.g. an ordering constraint as well as the Monad constraint.
> > >
> > > On Thu, May 30, 2019 at 5:26 PM Andrey Mokhov <
> andrey.mok...@newcastle.ac.uk> wrote:
> > >>
> > >> Hi Artem,
> > >>
> > >>
> > >>
> > >> Thanks for the pointer, but this doesn’t seem to be a solution to my
> challenge: they simply give up on overloading `map` for both Set and
> IntSet. As a result, we can’t write polymorphic functions over Set and
> IntSet if they involve any mapping.
> > >>
> > >>
> > >>
> > >> I looked at the prototype by Andreas Klebinger, and it doesn’t
> include the method `setMap` either.
> > >>
> > >>
> > >>
> > >> Perhaps, Haskell’s type classes just can’t cope with this problem.
> > >>
> > >>
> > >>
> > >> *ducks for cover*
> > >>
> > >>
> > >>
> > >> Cheers,
> > >>
> > >> Andrey
> > >>
> > >>
> > >>
> > >> From: Artem Pelenitsyn [mailto:a.pelenit...@gmail.com]
> > >> Sent: 30 May 2019 20:56
> > >> To: Andrey Mokhov 
> > >> Cc: ghc-devs@haskell.org; Andreas Klebinger  >
> > >> Subject: Re: Container type classes
> > >>
> > >>
> > >>
> > >> Hi Andrey,
> > >>
> > >>
> > >>
> > >> FWIW, mono-traversable (
> http://hackage.haskell.org/package/mono-traversable) suggests decoupling
> IsSet and Functor-like.
> > >>
> > >>
> > >>
> > >> In a nutshell, they define the IsSet class (in Data.Containers) with
> typical set operations like member and singleton, 

Re: MR does not merge

2019-01-20 Thread Clinton Mead
Hi All

I'm not a GHC dev, so my understanding of this process is limited to this
thread, but here are my thoughts.

My understanding is that we want to achieve the following two goals:

1. Never allow code which breaks tests to be committed to master.
2. Ensure that master is up to date as soon as possible with recently
submitted merge requests (MR).

The issue seems to be that the only way to ensure 1 is to use a serial
"rebase/test/make master branch" process on every MR, which means that if
you get a lot of MRs in a row you can get a queue of MRs blowing out.
So what I propose is the following:

1. Keep a queue of pending MRs.
2. When the previous test is complete, create a branch (let's call it
"pending") which is all the MRs in the queue rebased firstly on master and
then on each other. Drop any MRs which fail this rebasing.
3. Run tests against "pending".
4. If the tests pass, "pending" becomes "master". However, if the CI for
"pending" fails, "split" pending into two (half the MRs in each, perhaps
interleaving their sizes also), rebase them separately on master, and call
them "pending1" and "pending2". If there's only one MR pending, don't
"split" it (you can't); just report the test failure to the MR owner.
5. If either "pending1" or "pending2" passes, it becomes "master". Also, if
either or both of "pending1" and "pending2" fail, go back to step 4 for
those. If they both pass (which probably should never happen), maybe just
merge one into master arbitrarily and put the other MRs back in the pending
MR queue.
6. Once we've merged all our MRs into master (and, perhaps through the
binary search above, found the broken MR), start this process again with
the current pending MRs.

With this process we ensure master is never broken, but we can test and
merge n MRs in roughly log(n) CI runs, so the MR queue will not grow
arbitrarily long if the rate of submitted MRs exceeds the rate at which we
can run CI tests on them individually.
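
To make the bisection idea concrete, here is a minimal sketch in Haskell
(hypothetical types; `ciPasses` stands in for "rebase the batch on master
and run CI", and reporting failures to MR owners is elided):

newtype MR = MR { mrId :: Int } deriving Show

-- Test a batch; on failure, split it in half and retest each half,
-- returning the MRs that can safely land on master.
mergeBatch :: Monad m => ([MR] -> m Bool) -> [MR] -> m [MR]
mergeBatch _        []  = pure []
mergeBatch ciPasses mrs = do
  ok <- ciPasses mrs
  if ok
    then pure mrs                    -- the whole batch lands on master
    else case mrs of
      [_] -> pure []                 -- a single failing MR: report it
      _   -> do
        let (xs, ys) = splitAt (length mrs `div` 2) mrs
        goodXs <- mergeBatch ciPasses xs
        goodYs <- mergeBatch ciPasses ys
        pure (goodXs ++ goodYs)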

"Marge-bot" mentioned almost does what I suggest, except in the case of a
failure it runs the MRs one-by-one, instead of binary split like I suggest.
Perhaps my proposal could be best implemented as a patch to Marge-bot.


On Sat, Jan 19, 2019 at 2:42 AM Ben Gamari  wrote:

> Simon Peyton Jones via ghc-devs  writes:
>
> > |  Indeed this is a known issue that I have been working [1] with
> upstream
> > |  to resolve.
> >
> > Thanks. I'm not equipped to express a well-informed opinion about what
> > the best thing to do is. But in the meantime I WOULD be grateful for
> > explicit workflow advice. Specifically:
> >
> > * What steps should I take to get a patch committed to master,
> >   assuming I've done the review stuff and want to press "go"?
> >
> At the moment it's largely just a matter of when a bulk merge happens; I
> did a large merge on Wednesday and another yesterday.
>
> However, as Matthew suggested I think it may make sense to try using
> Marge bot to eliminate this manual process with little cost. It doesn't
> take particularly long to put together a bulk merge but it does require
> some form of human intervention which generally implies latency.
>
> Cheers,
>
> - Ben
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>