Re: Coercible questions

2019-10-05 Thread Michal Terepeta
Adding +ghc-devs to continue the thread

Hi Sandy,

Thanks for the answer! Do you think there is some fundamental reason for
this? Or just a matter of implementing this in GHC? It seems to me that
this should work just fine as long as the runtime representation is the
same.

And a related question--is it safe to `unsafeCoerce` an `Int` to a `Word`?
The only reason this could be problematic that comes to my mind is
that there could be an assumption that different `data` types do not alias
each other (although `newtype`s can, due to the `Coercible` functionality).
But I'm not sure this is ever relied on by GHC? Are there any other reasons
why this could be problematic?
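For what it's worth, here is a tiny experiment (my own illustration of the question, not evidence that the coercion is sanctioned by GHC) that does behave as one would expect:

```haskell
import Unsafe.Coerce (unsafeCoerce)

main :: IO ()
main = do
  -- Int and Word are both single-constructor wrappers around a
  -- machine-word-sized unboxed value, so reinterpreting the payload
  -- happens to work here -- nothing guarantees this in general.
  print (unsafeCoerce (42 :: Int) :: Word)        -- 42
  print (unsafeCoerce (maxBound :: Word) :: Int)  -- -1 (all bits set)
```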

Thanks!

- Michal



On Sat, Oct 5, 2019 at 5:27 PM Sandy Maguire  wrote:

> Hi Michal,
>
> Datas aren't coercible, only newtypes. This is why you can't coerce Ints
> and Words, and why Foo and Bar don't work.
>
> Sandy
>
> On Sat, Oct 5, 2019 at 4:17 PM Michal Terepeta 
> wrote:
>
>> Hi,
>>
>> I've started looking into using `Data.Coerce` (and the `Coercible`
>> type-class) for a personal project and was wondering why coercing between
>> `Int` and `Word` is not allowed? I don't see any fundamental reason why
>> this shouldn't work...
>>
>> Perhaps it's just a matter of GHC's implementation details leaking out?
>> IIRC internally GHC has separate `RuntimeRep`/`PrimRep` for a `Word#` and
>> for an `Int#`. If that's the case, would it make sense to unify these?
>> Their actual runtime representation should be the same, and I'd expect most
>> (all?) of their differences to be attached to `PrimOp`s.
>>
>> And that leads me to another question--what exactly goes wrong here:
>> ```
>> data Foo = Foo Int#
>> data Bar = Bar Int#
>>
>> test :: Bar
>> test = coerce (Foo 42#)
>> ```
>> Which fails with: "Couldn't match representation of type ‘Foo’ with that
>> of ‘Bar’ arising from a use of ‘coerce’"
>>
>> Perhaps I'm just misunderstanding exactly how `Coercible` works?
>>
>> Thanks in advance!
>>
>> - Michal
>>
>> PS. The ability to coerce through things like lists is amazing :)
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>
>
> --
> I'm currently travelling the world, sleeping on people's couches and doing
> full-time collaboration on Haskell projects. If this seems interesting to
> you, please consider signing up as a host!
> https://isovector.github.io/erdos/
>


Coercible questions

2019-10-05 Thread Michal Terepeta
Hi,

I've started looking into using `Data.Coerce` (and the `Coercible`
type-class) for a personal project and was wondering why coercing between
`Int` and `Word` is not allowed? I don't see any fundamental reason why
this shouldn't work...

Perhaps it's just a matter of GHC's implementation details leaking out?
IIRC internally GHC has separate `RuntimeRep`/`PrimRep` for a `Word#` and
for an `Int#`. If that's the case, would it make sense to unify these?
Their actual runtime representation should be the same, and I'd expect most
(all?) of their differences to be attached to `PrimOp`s.
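As a concrete illustration of that last point (my example, not from the thread): the conversions between the two representations already exist as primops that cost essentially nothing at runtime, which is what makes the shared representation plausible:

```haskell
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), Word (W#), int2Word#, word2Int#)

-- int2Word# / word2Int# just relabel the same machine word;
-- they compile away to nothing.
toWord :: Int -> Word
toWord (I# i#) = W# (int2Word# i#)

toInt :: Word -> Int
toInt (W# w#) = I# (word2Int# w#)

main :: IO ()
main = do
  print (toWord 42)            -- 42
  print (toInt (toWord (-1)))  -- -1
```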

And that leads me to another question--what exactly goes wrong here:
```
data Foo = Foo Int#
data Bar = Bar Int#

test :: Bar
test = coerce (Foo 42#)
```
Which fails with: "Couldn't match representation of type ‘Foo’ with that of
‘Bar’ arising from a use of ‘coerce’"
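For comparison (my own minimal variant, not from the original mail), wrapping one of the types in a newtype is exactly the case that `Coercible` does accept, since newtypes are what the machinery was built for:

```haskell
{-# LANGUAGE MagicHash #-}
import Data.Coerce (coerce)
import GHC.Exts (Int (I#), Int#)

data Foo = Foo Int#
newtype Bar = Bar Foo  -- a newtype around Foo, unlike the distinct `data Bar`

-- This compiles: Coercible relates a newtype to its underlying type.
test :: Bar
test = coerce (Foo 42#)

main :: IO ()
main = case test of
  Bar (Foo i#) -> print (I# i#)  -- 42
</imports>
```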

Perhaps I'm just misunderstanding exactly how `Coercible` works?

Thanks in advance!

- Michal

PS. The ability to coerce through things like lists is amazing :)
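The "coercing through lists" from the PS can be sketched like this (an assumed example: `coerce` lifts through the list's type argument at zero runtime cost):

```haskell
import Data.Coerce (coerce)

newtype Age = Age Int

ages :: [Age]
ages = map Age [1, 2, 3]

-- No list traversal happens at runtime: [Age] and [Int] have the
-- same representation, so the coercion is free even through the list.
ints :: [Int]
ints = coerce ages

main :: IO ()
main = print ints  -- [1,2,3]
```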


Re: [GHC DevOps Group] The future of Phabricator

2018-11-01 Thread Michal Terepeta
Hope you don't mind if I add an opinion of a small/occasional
contributor to the thread.

Personally, I would prefer a move to GitHub. Mostly due to familiarity
and network effect (pretty much everyone is on GitHub).

But I would also consider a move to GitLab a big improvement over the
current Phab-based setup.  A git-based workflow would be great - I use
arc/Phab too rarely to really invest in learning them better. (I just
figured out the simplest way to use them that seems to work and I'm
sticking to it :) I haven't actually used GitLab before, but it seems
super easy to sign in using GitHub credentials and the interface seems
quite familiar.

One thing that was already mentioned is the ticket handling and I just
wanted to say "+1". I *really* dislike Trac - it's slow, unintuitive, and
every time I use it I need to spend a couple of minutes to find the
guide to its own weird version of markdown... So a better place for
tickets that's tightly integrated with the code hosting/review tools
would be really cool! Which brings up an interesting aspect of this
discussion - if I had to choose between "GitHub for code hosting/review
& Trac for tickets" vs "GitLab for everything", I'd prefer the latter.

- Michal


On Tue, Oct 30, 2018 at 10:51 PM Boespflug, Mathieu  wrote:

> Hi Ben,
>
> On Tue, 30 Oct 2018 at 18:47, Ben Gamari  wrote:
> >
> > ...
> >
> > It occurs to me that I never did sit down to write up my thoughts on
> > reviewable. I tried doing a few reviews with it [1] and indeed it is
> > quite good; in many ways it is comparable to Differential. [...]
> > However, it really feels like a band-aid, introducing another layer of
> > indirection and a distinct conversation venue all to make up for what
> > are plain deficiencies in GitHub's core product.
>
> Sure. That sounds fine to me though, or indeed no different than say,
> using GitHub XOR Gitlab for code hosting, Phabricator for review (and
> only for that), and Trac for tickets (not ideal but no worse than
> status quo). If Phabricator (the paid for hosted version) or
> Reviewable.io really are the superior review tools, and if as review
> tools they integrate seamlessly with GitHub (or Gitlab), then that's an
> option worth considering.
>
> The important things are: reducing the maintenance burden (by
> preferring hosted solutions) while still meeting developer
> requirements and supporting a workflow that is familiar to most.
>
> > > So keeping the review UX issues aside for a moment, are there other
> > > GitHub limitations that you anticipate would warrant automation bots à
> > > la Rust-lang?
> > >
> > Ultimately Rust's tools all exist for a reason. Bors works around
> > GitHub's lacking ability to merge-on-CI-pass, Highfive addresses the
> > lack of a flexible code owner notification system, among other things.
> > Both of these are features that we either have already or would like to
> > have.
>
> ... and I assume based on your positive assessment, are both
> out-of-the-box features of Gitlab that meet the requirements?
>
> > On the whole, I simply see very few advantages to using GitHub over
> > GitLab; the latter simply seems to me to be a generally superior product.
>
> That may well be the case. The main argument for GitHub is taking
> advantage of its network effect. But a big part of that is not having
> to manage a new set of credentials elsewhere, as well as remembering
> different user names for the same collaborators on different
> platforms. You're saying I can use my GitHub credentials to
> authenticate on Gitlab. So in the end we possibly wouldn't be losing
> much of that network effect.
>
> > > I'm not too worried about the CI story. The hard part with CircleCI
> > > isn't CircleCI, it's getting to a green CircleCI. But once we're
> > > there, moving to a green OtherCI shouldn't be much work.
> > >
> > Right, and we are largely already there!
>
> That's great to hear.
> ___
> Ghc-devops-group mailing list
> ghc-devops-gr...@haskell.org
> https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group
>


Re: understanding assertions, part deux :) Re: whither built in unlifted Word32# / Word64# etc?

2018-08-01 Thread Michal Terepeta
I've rebased the diff and relaxed the assertion - do take a look if that
looks reasonable to you :)
https://phabricator.haskell.org/D4475

Cheers!

- Michal

On Wed, Jul 25, 2018 at 9:03 PM Michal Terepeta 
wrote:

> Hi Carter,
>
> I didn't write this assertion. I only validated locally (IIRC at the time
> I uploaded the diff, harbormaster was failing for some other reasons).
>
> I'll try to have a look at it this weekend.
>
> Cheers!
>
> - Michal
>
> On Wed, Jul 25, 2018 at 2:16 AM Carter Schonwald <
> carter.schonw...@gmail.com> wrote:
>
>> Michal: did you write this Assert about width? and if so could you
>> explain it so we can understand?
>>
>> hrmm... that code is in the procedure for generating C calls for 64bit
>> intel systems
>>
>> https://github.com/michalt/ghc/blob/int8/compiler/nativeGen/X86/CodeGen.hs#L2541
>> is the  top of that routine
>>
>> and width is defined right below the spot in question
>>
>> https://github.com/michalt/ghc/blob/int8/compiler/nativeGen/X86/CodeGen.hs#L2764-L2774
>>
>> it seems like the code/assertion likely predates Michal's work? (e.g., a
>> trick to make sure that genCCall64 doesn't get invoked by 32-bit platforms?)
>>
>> On Mon, Jul 23, 2018 at 8:54 PM Abhiroop Sarkar 
>> wrote:
>>
>>> Hi Michal,
>>>
>>> In the tests that you have added to D4475, are all the tests running
>>> fine?
>>>
>>> On my machine, I was running the FFI tests(
>>> https://github.com/michalt/ghc/blob/int8/testsuite/tests/ffi/should_run/PrimFFIInt8.hs)
>>> and they seem to fail at a particular assert statement in the code
>>> generator.
>>>
>>> To be precise this one:
>>> https://github.com/michalt/ghc/blob/int8/compiler/nativeGen/X86/CodeGen.hs#L2764
>>>
>>> Upon commenting that assert the tests run fine. Am I missing something
>>> or is the failure expected?
>>>
>>> Thanks,
>>> Abhiroop
>>>
>>> On Mon, Jul 9, 2018 at 8:31 PM Michal Terepeta <
>>> michal.terep...@gmail.com> wrote:
>>>
>>>> Just for the record, I've uploaded the changes to binary:
>>>> https://github.com/michalt/packages-binary/tree/int8
>>>>
>>>> - Michal
>>>>
>>>> On Wed, Jul 4, 2018 at 11:07 AM Michal Terepeta <
>>>> michal.terep...@gmail.com> wrote:
>>>>
>>>>> Yeah, if you look at the linked diff, there are a few tiny changes to
>>>>> binary that are necessary.
>>>>>
>>>>> I'll try to upload them to github later this week.
>>>>>
>>>>> Ben, is something blocking the review of the diff? I think I addressed
>>>>> all comments so far.
>>>>>
>>>>> - Michal
>>>>>
>>>>> On Wed, Jul 4, 2018 at 1:38 AM Abhiroop Sarkar 
>>>>> wrote:
>>>>>
>>>>>> Hello Michal,
>>>>>>
>>>>>> I was looking at your diff https://phabricator.haskell.org/D4475
>>>>>> and there seems to be some changes that you perhaps made in the binary
>>>>>> package (
>>>>>> https://phabricator.haskell.org/differential/changeset/?ref=199152=ignore-most).
>>>>>> I could not find your version of binary on your github repos list. Is it
>>>>>> possible for you to upload that so I can pull those changes?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Abhiroop
>>>>>>
>>>>>> On Mon, May 28, 2018 at 10:45 PM Carter Schonwald <
>>>>>> carter.schonw...@gmail.com> wrote:
>>>>>>
>>>>>>> Abhiroop has the gist, though the size of word args for SIMD is more
>>>>>>> something we want to be consistent between 32/64-bit modes, aka GHC
>>>>>>> targeting 32- or 64-bit Intel with SIMD support :). It would be best if
>>>>>>> the unpack and pack operations that map to and from unboxed tuples were
>>>>>>> consistent and precise type-wise.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> -Carter
>>>>>>>
>>>>>>> On May 28, 2018, at 5:02 PM, Abhiroop Sarkar 
>>>>>>> wrote:
>>>>>>>
>>>>>>> Hello Michal,
>>>>>>>
>>>>>>> My understa

Re: understanding assertions, part deux :) Re: whither built in unlifted Word32# / Word64# etc?

2018-07-25 Thread Michal Terepeta
Hi Carter,

I didn't write this assertion. I only validated locally (IIRC at the time I
uploaded the diff, harbormaster was failing for some other reasons).

I'll try to have a look at it this weekend.

Cheers!

- Michal

On Wed, Jul 25, 2018 at 2:16 AM Carter Schonwald 
wrote:

> Michal: did you write this Assert about width? and if so could you explain
> it so we can understand?
>
> hrmm... that code is in the procedure for generating C calls for 64bit
> intel systems
>
> https://github.com/michalt/ghc/blob/int8/compiler/nativeGen/X86/CodeGen.hs#L2541
> is the top of that routine
>
> and width is defined right below the spot in question
>
> https://github.com/michalt/ghc/blob/int8/compiler/nativeGen/X86/CodeGen.hs#L2764-L2774
>
> it seems like the code/assertion likely predates Michal's work? (e.g., a
> trick to make sure that genCCall64 doesn't get invoked by 32-bit platforms?)
>
> On Mon, Jul 23, 2018 at 8:54 PM Abhiroop Sarkar 
> wrote:
>
>> Hi Michal,
>>
>> In the tests that you have added to D4475, are all the tests running fine?
>>
>> On my machine, I was running the FFI tests(
>> https://github.com/michalt/ghc/blob/int8/testsuite/tests/ffi/should_run/PrimFFIInt8.hs)
>> and they seem to fail at a particular assert statement in the code
>> generator.
>>
>> To be precise this one:
>> https://github.com/michalt/ghc/blob/int8/compiler/nativeGen/X86/CodeGen.hs#L2764
>>
>> Upon commenting that assert the tests run fine. Am I missing something or
>> is the failure expected?
>>
>> Thanks,
>> Abhiroop
>>
>> On Mon, Jul 9, 2018 at 8:31 PM Michal Terepeta 
>> wrote:
>>
>>> Just for the record, I've uploaded the changes to binary:
>>> https://github.com/michalt/packages-binary/tree/int8
>>>
>>> - Michal
>>>
>>> On Wed, Jul 4, 2018 at 11:07 AM Michal Terepeta <
>>> michal.terep...@gmail.com> wrote:
>>>
>>>> Yeah, if you look at the linked diff, there are a few tiny changes to
>>>> binary that are necessary.
>>>>
>>>> I'll try to upload them to github later this week.
>>>>
>>>> Ben, is something blocking the review of the diff? I think I addressed
>>>> all comments so far.
>>>>
>>>> - Michal
>>>>
>>>> On Wed, Jul 4, 2018 at 1:38 AM Abhiroop Sarkar 
>>>> wrote:
>>>>
>>>>> Hello Michal,
>>>>>
>>>>> I was looking at your diff https://phabricator.haskell.org/D4475  and
>>>>> there seems to be some changes that you perhaps made in the binary 
>>>>> package (
>>>>> https://phabricator.haskell.org/differential/changeset/?ref=199152=ignore-most).
>>>>> I could not find your version of binary on your github repos list. Is it
>>>>> possible for you to upload that so I can pull those changes?
>>>>>
>>>>> Thanks
>>>>>
>>>>> Abhiroop
>>>>>
>>>>> On Mon, May 28, 2018 at 10:45 PM Carter Schonwald <
>>>>> carter.schonw...@gmail.com> wrote:
>>>>>
>>>>>> Abhiroop has the gist, though the size of word args for SIMD is more
>>>>>> something we want to be consistent between 32/64-bit modes, aka GHC
>>>>>> targeting 32- or 64-bit Intel with SIMD support :). It would be best if
>>>>>> the unpack and pack operations that map to and from unboxed tuples were
>>>>>> consistent and precise type-wise.
>>>>>>
>>>>>>
>>>>>>
>>>>>> -Carter
>>>>>>
>>>>>> On May 28, 2018, at 5:02 PM, Abhiroop Sarkar 
>>>>>> wrote:
>>>>>>
>>>>>> Hello Michal,
>>>>>>
>>>>>> My understanding of the issues is much less than Carter's.
>>>>>> However, I will state whatever I understood from discussions with him. Be
>>>>>> warned: my understanding might be wrong and Carter might be asking this
>>>>>> for some completely different reason.
>>>>>>
>>>>>> > Out of curiosity, why do you need Word64#/Word32# story to be fixed
>>>>>> for SIMD?
>>>>>>
>>>>>> One of the issues we are dealing with is multiple
>>>>>> microarchitectures(SSE, AVX, AVX2 etc). As a result different
>>>>>> microarchitect

Re: ZuriHac 2018 GHC DevOps track - Request for Contributions

2018-05-29 Thread Michal Terepeta
Hi Niklas,

Sorry for the slow reply - I'm totally snowed under at the moment.

I should be able to give some overview/examples of what are primops and how
they go through the compilation pipeline. And talk a bit about the
Cmm-level parts of GHC. But I won't have much time to prepare, so there
might be a fair amount of improvisation...

Are you coming to this week's HaskellerZ meetup? We could chat a bit more
about this.

Cheers!

- Michal

On Tue, May 22, 2018 at 12:07 PM Niklas Hambüchen  wrote:

> On 08/04/2018 15.01, Michal Terepeta wrote:
> > I'd be happy to help. :) I know a bit about the backend (e.g., cmm
> > level), but it might be tricky to find some smaller/self-contained
> > projects there that would fit ZuriHac.
>
> Hey Michal,
>
> that's great. Is there a topic you would like to give a talk about, or a
> pet peeve task that you'd like to tick off with the help of new potential
> contributors in a hacking session?
>
> Other topics that might be nice and that you might know about are "How do
> I add a new primop to GHC", handling all the way from the call on the
> Haskell side to emitting the code, or (if I remember that correctly)
> checking out that issue that GHC doesn't do certain optimisations yet (such
> as emitting less-than-full-word instructions e.g. for adding two Word8s, or
> lack of some strength reductions as in [1]).
>
> > You've mentioned performance regression tests - maybe we could also work
> on improving nofib?
>
> For sure!
> Shall we run a hacking session together where we let attendees work on
> both performance regression tests and nofib? It seems these two fit well
> together.
>
> Niklas
>
> [1]:
> https://stackoverflow.com/questions/23315001/maximizing-haskell-loop-performance-with-ghc/23322255#23322255
>
>
>


Re: Question about ArrayArray#

2018-05-03 Thread Michal Terepeta
On Thu, May 3, 2018 at 2:40 PM Carter Schonwald 
wrote:

> I think Ed’s structs package explicitly makes use of this :)
>

Oh, interesting! Thanks for the pointer!

Looking at Ed's code, he seems to be doing something similar to what I'm
also interested in: having a SmallArray# that at one index points to
another SmallArray# and at another to a ByteArray#. (My use case
involves multiple small arrays, so I'd rather use SmallArray# than
ArrayArray#.)
https://github.com/ekmett/structs/blob/master/src/Data/Struct/Internal.hs#L146

So I guess my second question becomes: is anyone aware of some rts/GC
invariants/expectations that would be broken by doing this? (ignoring the
issue of getting every `unsafeCoerce#` right :)

Thanks!

- Michal


Question about ArrayArray#

2018-05-02 Thread Michal Terepeta
Hi all,

I have a quick question about ArrayArray#. Is it safe to store *both* an
ByteArray# and ArrayArray# within the *same* ArrayArray#? For instance:
- at index 0 of an ArrayArray# I store a different ArrayArray#,
- at index 1 of that same ArrayArray# I store a ByteArray#.

It seems to me that this should be safe/supported from the point of view of
the runtime system:
- both ArrayArray# and ByteArray# have the same kind/runtime representation,
- the arrays have a header that tells rts/GC what they are/how to handle
them.
(But I, as a user, would be responsible for using the right primop with the
right index to read them back)

Is this correct?

Thanks a lot!

- Michal


Re: ZuriHac 2018 GHC DevOps track - Request for Contributions

2018-04-08 Thread Michal Terepeta
On Sat, Apr 7, 2018 at 3:34 PM Niklas Hambüchen  wrote:

> Hi GHC devs,
>
> The ZuriHac 2018 conference will feature a GHC DevOps track (which
> Andreas and I are coordinating), that will be all about fostering
> contributions to GHC and learning to hack it. There will be a room or
> two allocated at Zurihac for this purpose.
> [...]
> Please contact Andreas or me (on this list or privately) if you think
> you could help in any of these directions!
> If you're not sure, contact us anyway and tell us your idea!
>
> Best,
> Niklas and Andreas
> ZuriHac 2018 GHC DevOps track coordinators
>

Hi Niklas, Andreas,

I'd be happy to help. :) I know a bit about the backend (e.g., the Cmm
level), but it might be tricky to find some smaller/self-contained projects
there that would fit ZuriHac.
You've mentioned performance regression tests - maybe we could also work on
improving nofib?

- Michal


Re: New slow validate errors

2018-04-08 Thread Michal Terepeta
Yes, sorry for that!

This is tracked in: https://ghc.haskell.org/trac/ghc/ticket/14989
Revert: https://phabricator.haskell.org/D4577

- Michal

On Sun, Apr 8, 2018 at 11:08 AM Ömer Sinan Ağacan 
wrote:

> Hi,
>
> I see a lot of these errors in slow validate using current GHC HEAD:
>
> ghc: panic! (the 'impossible' happened)
>   (GHC version 8.5.20180407 for x86_64-unknown-linux):
> Each block should be reachable from only one ProcPoint
>
> This wasn't happening ~10 days ago. I suspect it may be D4417 but I haven't
> checked.
>
> Ömer
>


Re: Is "cml_cont" of CmmCall used in practice?

2018-03-18 Thread Michal Terepeta
On Sun, Mar 18, 2018 at 6:38 AM Shao, Cheng  wrote:

> Hi all,
>
> Is the "cml_cont" field of the CmmCall variant is really used in practice?
> I traversed the output of raw Cmm produced by ghc compiling the whole base
> package, but the value of cml_cont is always Nothing.
>
> Regards,
> Shao Cheng
>


Hi,

I'm not a GHC expert, so please don't trust everything I say ;)

That being said, I think `cml_cont` is used a lot. If you look at the
`compiler/codeGen` directory (that's what turns STG to cmm), you'll
see that `MkGraph.mkCallReturnsTo` is called a few times. That's the
function that will construct a `CmmCall` with the continuation block.

When dumping cmm, you'll often see all those `returns to` notes. For
instance, compiling:

```
foo :: Int -> Int
foo x =
  case x of
    42 -> 11
    _  -> 00
```

results in:

```
   [...]
   c2cN: // global
   I64[Sp - 8] = c2cI;
   R1 = R2;
   Sp = Sp - 8;
   if (R1 & 7 != 0) goto c2cI; else goto c2cJ;

   // Evaluate the parameter.
   c2cJ: // global
   call (I64[R1])(R1) returns to c2cI, args: 8, res: 8, upd: 8;
   // ^^^
   // this specifies the continuation block
   // see also PprCmm.pprNode

   // Now check if it's 42.
   c2cI: // global
   if (I64[R1 + 7] == 42) goto c2cU; else goto c2cT;
   c2cU: // global
   [...]
```

As far as I understand it, this allows the code above to jump to the
`x` closure (to evaluate it), and have the closure jump back to the
continuation block (note that its address is stored before we jump to
the closure). AFAICS this particular code is created by
`StgCmmExpr.emitEnter`.

Hope this helps!

- Michal


Re: [commit: ghc] master: Hoopl.Collections: change right folds to strict left folds (2974b2b)

2018-02-05 Thread Michal Terepeta
On Mon, Feb 5, 2018 at 12:19 PM Simon Peyton Jones 
wrote:

> Hi Michael
>
> Thanks for pushing forward with Hoopl and other back-end things.
>
> Did this patch elicit any performance gains?  Or what brought it to your
> attention?
>

I noticed this some time ago and just now got around to trying it out. I was
hoping for some improvements; sadly, the differences (if any) were too small
compared to the noise. But it seemed like a nice change on its own, so I
decided to send it out.
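The general shape of the change being discussed (a paraphrase of the idea, not the actual Hoopl patch) is replacing a lazy right fold with a strict left fold when building a strict structure:

```haskell
import Data.List (foldl')
import qualified Data.IntSet as IntSet

-- A lazy foldr would build a chain of thunks before the first insert
-- happens; foldl' forces the accumulator at each step instead,
-- keeping memory usage flat while producing the same set.
buildSet :: [Int] -> IntSet.IntSet
buildSet = foldl' (flip IntSet.insert) IntSet.empty

main :: IO ()
main = print (IntSet.size (buildSet [1 .. 10000]))  -- 10000
```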


> Do you have further plans for Hoopl and GHC's back end?
>

The biggest thing is probably:
https://github.com/ghc-proposals/ghc-proposals/pull/74
Other than that, I haven't made any plans yet ;) There are a few tickets
that I'd like to make some progress on. And I might try a few experiments
looking for some compile-time improvements in the Hoopl/backend - I think
currently nobody (including myself) is very keen on introducing new passes
or making existing ones more powerful due to compile-time constraints.
And then there are efforts by Moritz (LLVM backend using binary bitcode)
and Kavon (LLVM changes to support CPS-style calls) that sound really
interesting to me. (but they do require some more time to understand all
the context/changes, so I'm not sure if or how much I'll be able to help)

- Michal


Re: New primitive types?

2017-09-26 Thread Michal Terepeta
On Sun, Aug 27, 2017 at 7:49 PM Michal Terepeta <michal.terep...@gmail.com>
wrote:

> > On Thu, Aug 3, 2017 at 2:28 AM Sylvain Henry <sylv...@haskus.fr> wrote:
> > Hi,
> >
> > I also think we should do this but it has a lot of ramifications:
> > constant folding in Core, codegen, TH, etc.
> >
> > Also it will break code that uses primitive types directly, so maybe
> > it's worth a GHC proposal.
>
> Ok, a short proposal sounds reasonable.
>

Just FYI: I've opened:
https://github.com/ghc-proposals/ghc-proposals/pull/74

Cheers,
Michal


Re: New primitive types?

2017-08-27 Thread Michal Terepeta
> On Thu, Aug 3, 2017 at 2:28 AM Sylvain Henry  wrote:
> Hi,
>
> I also think we should do this but it has a lot of ramifications: constant
> folding in Core, codegen, TH, etc.
>
> Also it will break code that uses primitive types directly, so maybe it's
> worth a GHC proposal.

Ok, a short proposal sounds reasonable.

I don't think this would break a lot of code - based on a few searches
it seems that people don't really extract `Int#` from
`Int8/Int16/Int32` (similarly with words).
Or am I missing something?

Thanks,
Michal

PS. Sorry for the slow reply - I was traveling.


Re: GHC release timing and future build infrastructure

2017-08-02 Thread Michal Terepeta
On Tue, Aug 1, 2017 at 4:19 AM Ben Gamari  wrote:

>
> Hello everyone,
>
> I just posted a pair of posts on the GHC blog [1,2] laying out some
> thoughts on the GHC release cycle timing [1] and how this relates to the
> in-progress Jenkins build infrastructure [2]. When you have a some time
> feel free to give them a read and comment (either here or on the Reddit
> thread [3]).
>
> Cheers,
>
> - Ben
>
>
> [1] https://ghc.haskell.org/trac/ghc/blog/2017-release-schedule
> [2] https://ghc.haskell.org/trac/ghc/blog/jenkins-ci
> [3]
> https://www.reddit.com/r/haskell/comments/6qt0iv/ghc_blog_reflections_on_ghcs_release_schedule/



Hi Ben,

This sounds really cool! I'm pretty excited about more automation and
GHC releases more than once a year (maybe every 6 months?).

In my experience both automation and regular releases are a
significant win. They often result in more and quicker feedback, are
more motivating and make it easier to iterate/improve the project and
keep it in "releasable" state.

So thanks a lot for working on this! :)

Cheers,
Michal


Re: New primitive types?

2017-08-02 Thread Michal Terepeta
On Tue, Aug 1, 2017 at 8:08 PM Carter Schonwald 
wrote:
> One issue with packed fields is that on many architectures you can't
> quite do subword reads or writes. So it might not always be a win.

Could you give any examples?

Note that we're still going to do aligned reads/writes, i.e., `Int32#`
would still be 4-byte aligned, `Int16#` 2-byte, etc. So we might
have "holes", e.g., `data Foo = Foo Int8# Int64#` would still waste 7
bytes (since `Int64#` should be 8-byte aligned).

In the future, I'd also like to do some field reordering to avoid some
holes like that in cases like `Foo Int32# Int64# Int32#` (here it'd be
better if the in-memory layout was `Int64#` first and then the two
`Int32#`s).
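The alignment arithmetic here mirrors what `Foreign.Storable` reports for the corresponding fixed-width types (an analogy only, since unpacked constructor fields aren't Storable structs):

```haskell
import Data.Int (Int32, Int64)
import Foreign.Storable (alignment, sizeOf)

main :: IO ()
main = do
  -- 4-byte size/alignment for Int32, 8-byte for Int64: laying fields
  -- out as Int32, Int64, Int32 forces 4 bytes of padding before the
  -- Int64, while putting the Int64 first packs the two Int32s together.
  print (sizeOf (0 :: Int32), alignment (0 :: Int32))  -- (4,4)
  print (sizeOf (0 :: Int64), alignment (0 :: Int64))  -- (8,8) on x86-64
```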

> There's also the issue that C-- as it exists in GHC doesn't have any
> notion of subword-sized types.
>
> That said, I do support making Word/Int64/32# types more first-class /
> built-in. (I hit some issues which tie into this topic in the process of
> working on my still-in-progress safeword package.)
>
> Point being: I support improving what we have, but it's got a bit of
> surface area. Please let me know how I can help you dig into this, though.

Have a look at https://ghc.haskell.org/trac/ghc/ticket/13825 which
tracks the progress. The most recent diff is
https://phabricator.haskell.org/D3809

So far, most of this work is based on some unfinished code by Simon
Marlow to support storing constructor fields smaller than words. I'm
currently mostly finishing/fixing it and splitting it into smaller pieces.
Introducing more primitive stuff is the next step. (assuming everyone
is ok with this :)

Cheers,
Michal


New primitive types?

2017-08-01 Thread Michal Terepeta
Hi all,

I'm working on making it possible to pack constructor fields [1],
example:

```
data Foo = Foo {-# UNPACK #-} !Float {-# UNPACK #-} !Int32
```

should only require 4 bytes for the unpacked `Float` and 4 bytes for the
unpacked `Int32`, which on a 64-bit arch would take just 1 word (instead
of the 2 it currently does).

The diff to support packing of fields is in review [2], but to really
take advantage of it I think we need to introduce new primitive types:
- Int{8,16,32}#
- Word{8,16,32}#
along with some corresponding primops and with some other follow-up
changes like extending `PrimRep`.

Then we could use them in definitions of `Int{8,16,32}` and
`Word{8,16,32}` (they're currently just wrapping `Int#` and `Word#`).

Does that sound ok with everyone? (just making sure that this makes
sense before I invest more time into this :)

Thanks,
Michal

[1] https://ghc.haskell.org/trac/ghc/ticket/13825
[2] https://phabricator.haskell.org/D3809


Re: WordX/IntX wrap Word#/Int#?

2017-06-13 Thread Michal Terepeta
Just for the record, I've opened:
https://ghc.haskell.org/trac/ghc/ticket/13825
to track this.

Cheers,
Michal

On Mon, Jun 12, 2017 at 8:45 PM Michal Terepeta <michal.terep...@gmail.com>
wrote:

> Thanks a lot for the replies & links!
>
> I'll try to finish Simon's diff (and probably ask silly questions if I get
> stuck ;)
>
> Cheers,
> Michal
>
>


Re: WordX/IntX wrap Word#/Int#?

2017-06-12 Thread Michal Terepeta
Thanks a lot for the replies & links!

I'll try to finish Simon's diff (and probably ask silly questions if I get
stuck ;)

Cheers,
Michal


Re: Removing Hoopl dependency?

2017-06-12 Thread Michal Terepeta
> On Mon, Jun 12, 2017 at 8:05 PM Ben Gamari  wrote:
> Simon Peyton Jones via ghc-devs  writes:
>
> Snip
> >
> > That would leave Sophie free to do (B) free of the constraints of GHC
> > depending on it; but we could always use it later.
> >
> > Does that sound plausible?  Do we know of any other Hoopl users?
>
> CCing Ning, who is currently maintaining hoopl and I believe has some
> projects using it.
>
> Ning, you may want to have a look through this thread if you haven't
> already seen it. You can find the previous messages in the list archive
[1].
>
> Cheers,
>
> - Ben

Based on [1] there are four public packages:
- ethereum-analyzer,
- linearscan-hoopl,
- llvm-analysis,
- text-show-instances

But there might be more that are not open-source/uploaded to
hackage/stackage.

Cheers,
Michal

[1] https://www.stackage.org/lts-8.18/package/hoopl-3.10.2.1


WordX/IntX wrap Word#/Int#?

2017-06-11 Thread Michal Terepeta
Hi all,

I've just noticed that all `WordX` (and `IntX`) data types are
actually implemented as wrappers around `Word#` (and `Int#`). This
probably doesn't matter much if it's stored on the heap (due to
pointer indirection and heap alignment), but it also means that:
```
data Foo = Foo {-# UNPACK #-} !Word8 {-# UNPACK #-} !Int8
```
will actually take *a lot* of space: on 64 bit we'd need 8 bytes for
header, 8 bytes for `Word8`, 8 bytes for `Int8`.
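As a back-of-the-envelope check of those numbers - a sketch, not GHC's actual layout code, assuming one machine word for the header and one word per unpacked field on a 64-bit platform:

```haskell
-- Sketch of the heap sizes described above; the constants are
-- assumptions about a 64-bit platform, not queried from GHC.
wordBytes :: Int
wordBytes = 8

-- data Foo = Foo {-# UNPACK #-} !Word8 {-# UNPACK #-} !Int8
-- 1 header word + 1 word for the Word8 + 1 word for the Int8:
fooBytes :: Int
fooBytes = wordBytes * 3  -- 24 bytes for two 1-byte payloads

-- With hypothetical byte-sized Word8#/Int8# fields, both payloads
-- could share a single word: header + 1 word = 16 bytes.
fooBytesPacked :: Int
fooBytesPacked = wordBytes * 2

main :: IO ()
main = print (fooBytes, fooBytesPacked)  -- (24,16)
```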

Is there any reason for this? The only thing I can see is that this
avoids having to add things like `Word8#` primitives into the
compiler. (also the codegen would need to emit zero-extend moves when
loading from memory, like `movzb{l,q}`)

If we had things like `Word8#`, we could also consider changing `Bool`
to just wrap it (with the obvious encoding). That would allow us to both
UNPACK `Bool` *and* save space within the struct. (Alternatively,
one could imagine a `Bool#` that would be just a byte.)
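For illustration, the "obvious encoding" could look like this at the lifted level (a hypothetical sketch - `Word8#` does not exist, so ordinary `Word8` stands in for it here):

```haskell
import Data.Word (Word8)

-- Hypothetical encoding of Bool as a single byte, as a
-- `data Bool = B Word8#` representation would use: 0 = False,
-- anything non-zero (canonically 1) = True.
encodeBool :: Bool -> Word8
encodeBool False = 0
encodeBool True  = 1

decodeBool :: Word8 -> Bool
decodeBool w = w /= 0

main :: IO ()
main = print (map (decodeBool . encodeBool) [False, True])  -- [False,True]
```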

I couldn't find any discussion about this, so any pointers would be
welcome. :)

Thanks,
Michal

PS.  I've had a look at it after reading about the recent
implementation of struct field reordering optimization in rustc:
http://camlorn.net/posts/April%202017/rust-struct-field-reordering.html


Re: Removing Hoopl dependency?

2017-06-09 Thread Michal Terepeta
> On Fri, Jun 9, 2017 at 9:50 AM Simon Peyton Jones 
wrote:
> > Maybe this is the core of our disagreement - why is it a good idea to
> > have Hoopl as a separate package in the first place?
>
> One reason only: because it makes Hoopl usable by compilers other than
> GHC.  And, dually, efforts by others to improve Hoopl will benefit GHC.
>
> > If I proposed extracting parts of the Core optimizer to a separate
> > package, wouldn't you expect some really good reasons for doing this?
>
> A re-usable library should be
> a)  a significant chunk of code,
> b)  that can plausibly be re-purposed by others
> c)  and that has an explicable API
>
> I think the Core optimiser is so big, and so GHC specific, that (b) and
> (c) are unlikely to hold.  But we carefully designed Hoopl from the
> ground up so that it was agnostic about the node types, and so can be
> re-used for control flow graphs of many kinds.  It’s designed to be
> re-usable.  Whether it is actually re-used is another matter, of course.
> But if it’s part of GHC, it can’t be.

I agree with your characterization of a re-usable library and that
Core optimizer would not be a good fit. But I do think that Hoopl also
has some problems with b) and c) (although smaller):
- Using an optimizer-as-a-library is not really common (I'm not aware
  of any compilers doing this; LLVM comes somewhat close, but it
  exposes the whole language as the interface, so it's closer to the
  idea of extracting the whole Cmm backend). So I don't think the API
  for such a project is well understood.
- The API is pretty wide and puts serious constraints on the IR
  (after all, it defines blocks and graphs), making reusability
  potentially more tricky.
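To make the "blocks and graphs" constraint concrete, here is a rough sketch in the spirit of Hoopl's shape-indexed types (heavily simplified - the `Node` type is a made-up toy IR, and the constructors only approximate hoopl's actual `Compiler.Hoopl` definitions):

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}

data O  -- "open": control may fall through this end
data C  -- "closed": this end is a label or a control transfer

-- Any IR plugged into such an API has to fit this node shape.
data Node e x where
  Label  :: Int           -> Node C O  -- block entry
  Assign :: String -> Int -> Node O O  -- straight-line code
  Goto   :: Int           -> Node O C  -- block exit

-- Blocks built from shape-indexed nodes; the indices make illegal
-- sequences (e.g. code after a jump) unrepresentable.
data Block e x where
  BlockCO :: Node C O -> Block O O -> Block C O
  BNil    :: Block O O
  BSnoc   :: Block O O -> Node O O -> Block O O
  BlockOC :: Block O O -> Node O C -> Block O C

blockLen :: Block e x -> Int
blockLen (BlockCO _ b) = 1 + blockLen b
blockLen BNil          = 0
blockLen (BSnoc b _)   = blockLen b + 1
blockLen (BlockOC b _) = blockLen b + 1

main :: IO ()
main = print (blockLen (BlockCO (Label 0) (BSnoc BNil (Assign "x" 1))))
-- prints 2
```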

So I think I understand your argument and we just disagree on whether
this is worth the effort of having a separate package.

>
> [...]
>
> > I've pointed out multiple reasons why I think it has a significant
> > cost.
>
> Can you just summarise them again briefly for me?  If we are free to
> choose nomenclature and API for hoopl2, I’m not yet seeing why making it
> a separate package is harder than not doing so. E.g. template-haskell is
> a separate package.

Having even Hoopl2 as a separate package would still entail
additional work:
- Hoopl2 would still need to duplicate some concepts (e.g., `Unique`,
  etc.), since it needs to be standalone.
- Understanding the code (esp. by newcomers) would be harder: the Cmm
  backend would be split between GHC and Hoopl2, with the latter
  necessarily being far more general/polymorphic than needed by GHC.
- Getting the right performance in the presence of all this additional
  generality/polymorphism will likely require a fair amount of
  additional work.
- If Hoopl2 is used by other compilers, then we need to be more careful
  about changing anything in incompatible ways; this will require
  more discussions & release coordination.

Considering that Hoopl was never actually picked up by other
compilers, I'm not convinced that this cost is justified. But I
understand that other people might have a different opinion.
So how about a compromise:
- decouple GHC from the current Hoopl (ie, go ahead with my diff),
- keep everything Hoopl related only in `compiler/cmm/Hoopl` with the
  long-term intention of creating a separate package,
- experiment with and improve the code,
- once (if?) we're happy with the results, discuss what/how to
  extract to a separate package.
That gives us the freedom to try things out and see what works well
(I simply don't have ready solutions for anything, being able to
experiment is IMHO quite important). And once we reach the right
performance/representation/abstraction/API we can work on extracting
that.

What do you think?

Cheers,
Michal


Re: Removing Hoopl dependency?

2017-06-08 Thread Michal Terepeta
> On Wed, Jun 7, 2017 at 7:05 PM Simon Peyton Jones 
wrote:
> Michael
>
> Sorry to be slow.
>
> > Note that what I’m actually advocating is to *finish* forking Hoopl. The
> > fork really started in ~2012 when the “new Cmm backend” was being
> > finished.
>
> Yes, I know.  But what I’m suggesting is to revisit the reasons for
> that fork, and re-join if possible.  Eg if Hoopl is too slow, can’t we
> make it faster?  Why is GHC’s version faster?
>
> > apart from the performance
> > (as noted above), there’s the issue of Hoopl’s interface. IMHO the
> > node-oriented approach taken by Hoopl is both not flexible enough and it
> > makes it harder to optimize it. That’s why I’ve already changed GHC’s
> > `Hoopl.Dataflow` module to operate “block-at-a-time”
>
> Well that sounds like an argument to re-engineer Hoopl’s API, rather
> than an argument to fork it.  If it’s a better API, can’t we make it
> better for everyone?  I don’t yet understand what the “block-oriented”
> API is, or how it differs, but let’s have the conversation.

Sure, but re-engineering the API of a publicly used package has significant
cost for everyone involved:
- GHC: we might need to wait longer for any improvements and spend
  more time discussing various options (and compromises - what makes
  sense for GHC might not make sense for other people)
- Hoopl users: will need to migrate to the new APIs potentially
  multiple times
- Hoopl maintainers: might need to maintain more than one branch of
  Hoopl for a while

And note that just bumping a version number might not be enough.  IIRC
Stackage only allows one version of each package, and since Hoopl is a
boot package for GHC, the new version will move to Stackage along with
GHC. So any users of Hoopl that want to use the old package will not
be able to use that version of Stackage.

> > When you say
> > that we should “just fix Hoopl”, it sounds to me that we’d really need
> > to rewrite it from scratch. And it’s much easier to do that if we can
> > just experiment within GHC without worrying about breaking other
> > existing Hoopl users
>
> Fine.  But then let’s call it hoopl2, make it a separate package
> (perhaps with GHC as its only client for now), and declare that it’s
> intended to supersede hoopl.

Maybe this is the core of our disagreement - why is it a good idea to
have Hoopl as a separate package in the first place?

I've pointed out multiple reasons why I think it has a significant cost.
But I don't really see any major benefits. Looking at the commit
history of Hoopl, there hasn't been much development on it since 2012,
when Simon M was trying to get the new GHC backend working (since
then, it's mostly maintenance patches to keep up with changes in
`base`, etc.).
Extracting a core part of any project to a shared library has some
real costs, so there should be equally real benefits that outweigh
that cost. (If I proposed extracting parts of the Core optimizer to a
separate package, wouldn't you expect some really good reasons for
doing this?)
I also think this is quite different from a dependency on, say,
`binary`, `containers` or `pretty`, where the API of the library is
smaller (at least conceptually) and much better understood and
established.

Cheers,
Michal


Re: Removing Hoopl dependency?

2017-05-28 Thread Michal Terepeta
Cool, thanks for quick replies!
I've sent out https://phabricator.haskell.org/D3616

Cheers,
Michal


Re: FYI: removing `fibon`

2017-03-22 Thread Michal Terepeta
Ok, thanks Gracjan!

Ben, could I ask you to pull from:
https://github.com/michalt/nofib/tree/fibon
(https://github.com/michalt/nofib.git branch `fibon`)
Or if you prefer Phab, let me know if there's some magic incantation
to make it work with this patch (`arc` currently crashes for me)

Thanks,
Michal

On Tue, Mar 14, 2017 at 9:32 PM Gracjan Polak <gracjanpo...@gmail.com>
wrote:

> I'm not working on it and do not plan to start again.
>
> Looks like fibon never worked and wasn't used for anything, so it should
> be removed. Still it would make sense to replace this code with something
> used as part of normal nofib test cases.
>
> 2017-03-14 19:59 GMT+01:00 Michal Terepeta <michal.terep...@gmail.com>:
>
> Hi all,
>
> I wanted to remove `fibon` (from `nofib`) - it's broken, abandoned
> upstream (no commits in 5 years) and I'm not aware of anyone using it
> or working on it. At this point I don't think it makes sense to try to
> revive it - I'd prefer putting the time/effort into getting a few new
> benchmarks.
>
> There were already discussions about removing it in
> https://ghc.haskell.org/trac/ghc/ticket/11501
>
> If someone is actually working on getting it to work again, please
> shout!
>
> Thanks,
> Michal
>
> PS. I've tried uploading the patch to Phab, but I think it's just too
> large (arc is crashing). So I've uploaded it to github:
> https://github.com/michalt/nofib/tree/fibon
>
>


Re: Stat too good

2017-03-18 Thread Michal Terepeta
Just FYI: I'm on 64-bit Linux and don't see those failures (I just
validated at 763f43e6d3)

Cheers,
Michal

On Fri, Mar 17, 2017 at 6:49 PM Simon Peyton Jones via ghc-devs <
ghc-devs@haskell.org> wrote:

> Ben,
>
> I still get these four stat-too-good “failures” on 64-bit Linux.
>
> Unexpected stat failures:
>
>/tmp/ghctest-ca0gfq/test   spaces/./perf/compiler/T13035.run  T13035
> [stat too good] (normal)
>
>/tmp/ghctest-ca0gfq/test   spaces/./perf/compiler/T12425.run  T12425
> [stat too good] (optasm)
>
>/tmp/ghctest-ca0gfq/test   spaces/./perf/compiler/T1969.run   T1969
> [stat too good] (normal)
>
>/tmp/ghctest-ca0gfq/test   spaces/./perf/compiler/T9233.run   T9233
> [stat too good] (normal)
>
> Don’t you?
>
> Simon


FYI: removing `fibon`

2017-03-14 Thread Michal Terepeta
Hi all,

I wanted to remove `fibon` (from `nofib`) - it's broken, abandoned
upstream (no commits in 5 years) and I'm not aware of anyone using it
or working on it. At this point I don't think it makes sense to try to
revive it - I'd prefer putting the time/effort into getting a few new
benchmarks.

There were already discussions about removing it in
https://ghc.haskell.org/trac/ghc/ticket/11501

If someone is actually working on getting it to work again, please
shout!

Thanks,
Michal

PS. I've tried uploading the patch to Phab, but I think it's just too
large (arc is crashing). So I've uploaded it to github:
https://github.com/michalt/nofib/tree/fibon


Re: nofib on Shake

2017-01-10 Thread Michal Terepeta
On Tue, Jan 10, 2017 at 11:35 AM Gracjan Polak 
wrote:
> I was looking nearby recently and you might want to take into account my
> discoveries described in https://ghc.haskell.org/trac/ghc/ticket/11501

Thanks a lot for mentioning it! (I didn't see this ticket/discussion)

I don't want to get in your way - did you already start working on
something? Do you have some concrete plans wrt. nofib?

From my side, I was recently mostly interested in using nofib to
measure the performance of GHC itself. Nofib already tries to do that,
but it's super flaky (it only compiles things once and most modules
are small).  So I was thinking of improving this, but when I started
to look into it a bit closer, I decided that it might be better to
start with the build system ;) And then add options to compile things
more than once, add some compile-time only benchmarks, etc.
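A minimal sketch of the "compile things more than once" idea - illustrative only: `timeOnce`/`bestOf` are invented names, and a real harness would time a compiler invocation rather than a pure computation:

```haskell
import Control.Exception (evaluate)
import System.CPUTime (getCPUTime)

-- Time a single run of an action, in seconds.
timeOnce :: IO a -> IO Double
timeOnce act = do
  start <- getCPUTime
  _ <- act
  end <- getCPUTime
  pure (fromIntegral (end - start) / 1e12)  -- CPU time is in picoseconds

-- Best-of-n: the minimum is usually the least noisy summary of
-- repeated runs on a machine with background activity.
bestTime :: [Double] -> Double
bestTime = minimum

bestOf :: Int -> IO a -> IO Double
bestOf n act = bestTime <$> mapM (const (timeOnce act)) [1 .. n]

main :: IO ()
main = do
  t <- bestOf 3 (evaluate (sum [1 :: Int .. 100000]))
  print (t >= 0)  -- prints True
```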

Thanks,
Michal


Re: nofib on Shake

2017-01-09 Thread Michal Terepeta
On Sun, Jan 8, 2017 at 10:56 PM Joachim Breitner 
wrote:
> Hi,
>
> Am Sonntag, den 08.01.2017, 13:45 -0500 schrieb Ben Gamari:
> > > We could also create a cabal and stack files for `nofib-analyse`
(making
> > > it possible to use some libraries for it).
> > >
> > This would be great. This would allow me to drop a submodule from my own
> > performance monitoring tool.
>
> Exists since last April:
> http://hackage.haskell.org/package/nofib-analyse
>
> Only the binary so far, though, but good enough for
> "cabal install nofib-analyse".

Oh, interesting! But now I'm a bit confused - what's the relationship
between https://github.com/nomeata/nofib-analyse and
https://git.haskell.org/nofib.git, e.g., is the github repo the
upstream for nofib-analyse and the haskell.org one for the other parts
of nofib? Or is the github one just a mirror and all patches should go
to the haskell.org repo?

Thanks,
Michal


Re: nofib on Shake

2017-01-09 Thread Michal Terepeta
> On Sun, Jan 8, 2017 at 7:45 PM Ben Gamari <b...@well-typed.com> wrote:
> Michal Terepeta <michal.terep...@gmail.com> writes:
>
> > Hi all,
> >
> > While looking at nofib, I've found a blog post from Neil Mitchell [1],
> > which describes a Shake build system for nofib. The comments mentioned
> > that this should get merged, but it seems that nothing actually
> > happened? Is there some fundamental reason for that?
> >
> Indeed there is no fundamental reason and I think it would be great to
> make nofib a bit easier to run and modify.

Ok, cool. I'll have a look at using Neil's code and see if it needs
any updating or if something is missing.

> However, I think we should be careful to maintain some degree of
> compatibility. One of the nice properties of nofib is that it can be run
> against a wide range of compiler versions. It would be a shame if, for
> instance, Joachim's gipeda had to do different things to extract
> performance metrics from logs produced by pre- and post-Shake nofibs.

Thanks for mentioning this! I don't have any concrete plans to change
that at the moment, but I was thinking that in the future it'd be nice
if the results were, e.g., a simple csv file, instead of a log
containing all the stdout/stderr (i.e., it currently contains the
results, any warnings from GHC, output from `Debug.Trace.trace`,
etc.)
Anyway, that's probably further down the road, so before doing
anything, I'll likely send an email to ghc-devs so that we can discuss
this.
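For illustration, the "simple csv file" output could be as small as this (a sketch with invented names and columns, not an actual nofib format):

```haskell
import Data.List (intercalate)

-- One row per (benchmark, metric): only the results, no interleaved
-- warnings or Debug.Trace output.
type Row = (String, String, Double)  -- benchmark, metric, value

toCsv :: [Row] -> String
toCsv rows = unlines (header : map fmt rows)
  where
    header = "benchmark,metric,value"
    fmt (b, m, v) = intercalate "," [b, m, show v]

main :: IO ()
main = putStr (toCsv [("spectral/primes", "compile-time-s", 0.42)])
```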


Cheers,
Michal


nofib on Shake

2017-01-08 Thread Michal Terepeta
Hi all,

While looking at nofib, I've found a blog post from Neil Mitchell [1],
which describes a Shake build system for nofib. The comments mentioned
that this should get merged, but it seems that nothing actually happened?
Is there some fundamental reason for that?

If not, I'd be interested in picking this up - the current make-based
system is pretty confusing for me and `runstdtest` looks simply
terrifying ;-)
We could also create cabal and stack files for `nofib-analyse` (making
it possible to use some libraries for it).

Thanks,
Michal

[1]
http://neilmitchell.blogspot.ch/2013/02/a-nofib-build-system-using-shake.html


Re: Measuring performance of GHC

2016-12-07 Thread Michal Terepeta
On Tue, Dec 6, 2016 at 10:10 PM Ben Gamari  wrote:
> [...]
> > How should we proceed? Should I open a new ticket focused on this?
> > (maybe we could try to figure out all the details there?)
> >
> That sounds good to me.

Cool, opened: https://ghc.haskell.org/trac/ghc/ticket/12941 to track
this.

Cheers,
Michal


Re: Measuring performance of GHC

2016-12-06 Thread Michal Terepeta
> On Tue, Dec 6, 2016 at 2:44 AM Ben Gamari <b...@smart-cactus.org> wrote:
> Michal Terepeta <michal.terep...@gmail.com> writes:
>
> [...]
>>
>> Looking at the comments on the proposal from Moritz, most people would
>> prefer to
>> extend/improve nofib or `tests/perf/compiler` tests. So I guess the main
>> question is - what would be better:
>> - Extending nofib with modules that are compile only (i.e., not
>>   runnable) and focus on stressing the compiler?
> >> - Extending `tests/perf/compiler` with ability to run all the tests
> >>   and do easy "before and after" comparisons?
>>
>I don't have a strong opinion on which of these would be better.
>However, I would point out that currently the tests/perf/compiler tests
>are extremely labor-intensive to maintain while doing relatively little
>to catch performance regressions. There are a few issues here:
>
> * some tests aren't very reproducible between runs, meaning that
>   contributors sometimes don't catch regressions in their local
>   validations
> * many tests aren't very reproducible between platforms and all tests
>   are inconsistent between differing word sizes. This means that we end
>   up having many sets of expected performance numbers in the testsuite.
>   In practice nearly all of these except 64-bit Linux are out-of-date.
> * our window-based acceptance criterion for performance metrics doesn't
>   catch most regressions, which typically bump allocations by a couple
>   percent or less (whereas the acceptance thresholds range from 5% to
>   20%). This means that the testsuite fails to catch many deltas, only
>   failing when some unlucky person finally pushes the number over the
>   threshold.
>
> Joachim and I discussed this issue a few months ago at Hac Phi; he had
> an interesting approach to tracking expected performance numbers which
> may both alleviate these issues and reduce the maintenance burden that
> the tests pose. I wrote down some terse notes in #12758.

Thanks for mentioning the ticket!

To be honest, I'm not a huge fan of having performance tests treated the
same as any other tests. IMHO they are quite different:

- They usually need a quiet environment (e.g., you cannot run two
  different tests at the same time). But with ordinary correctness tests,
  I can run as many as I want concurrently.

- The output is not really binary (correct vs incorrect) but some kind of
  number (or collection of numbers) that we want to track over time.

- The decision whether to fail is harder. Since the output might be noisy,
  you need either quite relaxed bounds (and miss small regressions) or
  stronger bounds (and suffer from the flakiness and maintenance
  overhead).

So for the purpose of "I have a small change and want to check its effect
on compiler performance, expecting, e.g., a ~1% difference", the model of
running benchmarks separately from tests is much nicer. I can run them
when I'm not doing anything else on the computer and then easily compare
the results (that's what I usually do for nofib). For tracking performance
over time, one could set something up to run the benchmarks when idle
(isn't that what perf.haskell.org is doing?).

Due to that, if we want to extend tests/perf/compiler to support this use
case, I think we should include benchmarks there that are *not* tests (and
are not included in ./validate), with some easy tool to run all of them
and give a quick comparison of what's changed.

To a certain degree this would then be orthogonal to the improvements
suggested in the ticket. But we could probably reuse some things (e.g.,
dumping .csv files for perf metrics?).

How should we proceed? Should I open a new ticket focused on this? (maybe we
could try to figure out all the details there?)

Thanks,
Michal


Re: Measuring performance of GHC

2016-12-05 Thread Michal Terepeta
On Mon, Dec 5, 2016 at 12:00 PM Moritz Angermann 
wrote:

> Hi,
>
> I’ve started the GHC Performance Regression Collection Proposal[1]
> (Rendered [2])
> a while ago with the idea of having a trivially community curated set of
> small[3]
> real-world examples with performance regressions. I might be at fault here
> for
> not describing this to the best of my abilities. Thus if there is
> interested, and
> this sounds like an useful idea, maybe we should still pursue this
> proposal?
>
> Cheers,
>  moritz
>
> [1]: https://github.com/ghc-proposals/ghc-proposals/pull/26
> [2]:
> https://github.com/angerman/ghc-proposals/blob/prop/perf-regression/proposals/-perf-regression.rst
> [3]: for some definition of small
>

Interesting! I must have missed this proposal. It seems that it didn't
meet with much enthusiasm though (but it also proposes to have a
completely separate repo on github).

Personally, I'd be happy with something more modest:
- A collection of modules/programs that are more representative of real
  Haskell programs and stress various aspects of the compiler.
  (This seems to be a weakness of nofib, where >90% of modules compile in
  less than 0.4s.)
- A way to compile all of those and do "before and after" comparisons
  easily. To measure the time, we should probably try to compile each
  module at least a few times.
  (It seems that this is not currently possible with
  `tests/perf/compiler`, and nofib only compiles the programs once
  AFAICS.)

Looking at the comments on the proposal from Moritz, most people would
prefer to extend/improve nofib or the `tests/perf/compiler` tests. So I
guess the main question is what would be better:
- Extending nofib with modules that are compile-only (i.e., not runnable)
  and focus on stressing the compiler?
- Extending `tests/perf/compiler` with the ability to run all the tests
  and do easy "before and after" comparisons?

Personally, I'm slightly leaning towards `tests/perf/compiler`, since this
would allow the same module to be shared as a test for `validate` and used
for comparing the performance of the compiler before and after a change.

What do you think?

Thanks,
Michal


Measuring performance of GHC

2016-12-04 Thread Michal Terepeta
Hi everyone,

I've been running nofib a few times recently to see the effect of some
changes on compile time (not the runtime of the compiled program). And
I've started wondering how representative nofib is when it comes to
measuring compile time and compiler allocations. It seems that most of
the nofib programs compile really quickly...

Is there some collection of modules/libraries/applications that was put
together with the purpose of benchmarking GHC itself and that I just
haven't seen/found?

If not, maybe we should create something? IMHO it sounds reasonable to have
separate benchmarks for:
- Performance of GHC itself.
- Performance of the code generated by GHC.

Thanks,
Michal


Re: Status of "Improved LLVM backend"

2016-11-29 Thread Michal Terepeta
On Mon, Nov 28, 2016 at 2:43 AM Moritz Angermann 
wrote:
[...]
> For the llvm code gen in ghc it’s usually the `_fast` suffix functions.
> See [1] and the `genStore_fast` 30 lines further down.  My bitcode llvm
> gen follows that file [1], almost identically, as can be seen in [2].
> However the `_fast` path is currently disabled.
>
> An example of the generated ir for the current llvm backend and the
> bitcode backend (textual ir, via llvm-dis) can be found in [3] and [4]
> respectively.

Cool, thanks a lot for the links!

> > > I don’t know if generating llvm from stg instead of cmm would be a
> > > better approach, which is what ghcjs and eta do as far as I know.
> >
> > Wouldn't a step from STG to LLVM be much harder (LLVM IR is a pretty
> > low-level representation compared to STG)? There are also a few passes
> > on the Cmm level that seem necessary, e.g., `cmmLayoutStack`.

> There is certainly a tradeoff between retaining more high-level
> information and having to lower it oneself.  If I remember luite
> correctly, he said he had a similar intermediate format to cmm, just
> not cmm but something richer, which allows it to better target
> javascript.  The question basically boils down to asking if cmm is too
> low-level for llvm already; the embedding of wordsizes is an example
> where I think cmm might be too low-level for llvm.

Ok, I see. This is quite interesting - I'm wondering if it makes sense to
collect thoughts/ideas like that somewhere (e.g., a wiki page with all the
issues of using the current Cmm for the LLVM backend, or just some
comments in the code).

Thanks,
Michal


Re: Status of "Improved LLVM backend"

2016-11-27 Thread Michal Terepeta
> Hi,
>
> I’m trying to implement a bitcode-producing llvm backend[1], which
> would potentially allow using a range of llvm versions with ghc.
> However, this is only tangentially relevant to the improved llvm
> backend, as Austin correctly pointed out[2], as there are other
> complications besides the fragility of the textual representation.
>
> So this is mostly only relevant to the improved ir you mentioned. The
> bitcode code gen plugin right now follows mostly the textual ir
> generation, but tries to prevent the ubiquitous symbol to i8* casting.
> The llvm gen turns cmm into ir; at that point, however, the wordsize
> has been embedded already, which means that the current textual llvm
> gen as well as the bitcode llvm gen try to figure out if relative
> access is in multiple wordsizes to use llvms getElementPointer.

That sounds interesting - do you know where I could find out more about
this? (both when it comes to the current LLVM codegen and yours)

> I don’t know if generating llvm from stg instead of cmm would be a better
> approach, which is what ghcjs and eta do as far as I know.

Wouldn't a step from STG to LLVM be much harder (LLVM IR is a pretty
low-level representation compared to STG)? There are also a few passes on
the Cmm level that seem necessary, e.g., `cmmLayoutStack`.

Cheers,
Michal


Status of "Improved LLVM backend"

2016-11-26 Thread Michal Terepeta
Hi all,

I was wondering what’s the current status of the “Improved LLVM backend”
project (https://ghc.haskell.org/trac/ghc/wiki/ImprovedLLVMBackend). The
page mentions a few main problems, but some seem to be already fixed:
1) Using/supporting only one version of LLVM.
   This has been done AFAIK.
2) Prebuilt binaries to be shipped together with GHC.
   I can't find anything about this. Is there a ticket? Has there been any
   progress on this?
3) Adding support for split-objs.
   I found a ticket about it: https://ghc.haskell.org/trac/ghc/ticket/8300
   which was closed as WONTFIX in favor of split-sections. So I guess this
   can also be considered done.
4) Figuring out what LLVM optimizations are useful.
   Again, I can't seem to find anything here. Has anyone looked at this?
   I only found an issue about this:
   https://ghc.haskell.org/trac/ghc/ticket/11295

The page also mentions that the generated IR could be improved in many
cases, but it doesn't link to any tickets or discussions. Is there
something I could read to better understand what the main problems are?
The only thing I can recall is that proc-point splitting is likely to
cause issues for LLVM's ability to optimize the code. (I found a couple
of email threads about this but couldn't find any follow-ups.)

Thanks,
Michal


Re: Dataflow analysis for Cmm

2016-10-24 Thread Michal Terepeta
On Fri, Oct 21, 2016 at 4:04 PM Simon Marlow <marlo...@gmail.com> wrote:

> On 16 October 2016 at 14:03, Michal Terepeta <michal.terep...@gmail.com>
> wrote:
>
> Hi,
>
> I was looking at cleaning up a bit the situation with dataflow analysis
> for Cmm.
> In particular, I was experimenting with rewriting the current
> `cmm.Hoopl.Dataflow` module:
> - To only include the functionality to do analysis (since GHC doesn’t seem
> to use
>   the rewriting part).
>   Benefits:
>   - Code simplification (we could remove a lot of unused code).
>   - Makes it clear what we’re actually using from Hoopl.
> - To have an interface that works with transfer functions operating on a
> whole
>   basic block (`Block CmmNode C C`).
>   This means that it would be up to the user of the algorithm to traverse
> the
>   whole block.
>
>
> Ah! This is actually something I wanted to do but didn't get around to.
> When I was working on the code generator I found that using Hoopl for
> rewriting was prohibitively slow, which is why we're not using it for
> anything right now, but I think that pulling out the basic block
> transformation is possibly a way forwards that would let us use Hoopl.
>

Right, I've also seen:
https://plus.google.com/107890464054636586545/posts/dBbewpRfw6R
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/HooplPerformance
but it seems that there weren't any follow-ups/conclusions on that.
Also, I haven't started writing anything for the rewriting yet (only
analysis for now).

Btw. I'm currently experimenting with GHC's fork of the Dataflow module -
and for now I'm not planning on pushing the changes to the upstream Hoopl.
There are already projects that depend on the current interface of Hoopl
(it's on Hackage, after all) and it's going to be hard to make certain
changes there. Hope that's ok with everyone!
(also, we can always revisit this question later)

> A lot of the code you're removing is my attempt at "optimising" the
> Hoopl dataflow algorithm to make it usable in GHC.  (I don't mind
> removing this, it was a failed experiment really)
>

Thanks for saying that!


>   Benefits:
>   - Further simplifications.
>   - We could remove the `analyzeFwdBlocks` hack, which AFAICS is just a
>     copy of `analyzeFwd` that ignores the middle nodes (probably for
>     efficiency of analyses that only look at parts of the blocks).
>
>
> Aren't we using this in dataflowAnalFwdBlocks, that's used by
> procpointAnalysis?
>
>

Yes, sorry for the confusion! What I meant is that
analyzeFwdBlocks/dataflowAnalFwdBlocks is currently a special case of
analyzeFwd/dataflowAnalFwd that only looks at the first and last nodes.
So if we move to a block-oriented interface, it simply stops being a
special case and fits the new interface (since it's the analysis that
decides whether to look at the whole block or only parts of it). So it's
removed in the sense of "removing a special case".
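To illustrate the point (this is a simplified sketch, not the actual GHC or
Hoopl types - `Block`, `fullTransfer`, and `firstLastTransfer` are made-up
names): once the driver hands the whole block to the client, the
"first/last only" analysis is just an ordinary transfer function that
skips the middle nodes, rather than a separate driver.

```haskell
-- Hypothetical illustration: with a block-oriented interface, the
-- "first/last only" analysis is just a particular transfer function,
-- not a special case in the driver.
data Block n = Block
  { firstNode   :: n
  , middleNodes :: [n]
  , lastNode    :: n
  }

-- A transfer that walks every node of the block...
fullTransfer :: (n -> fact -> fact) -> Block n -> fact -> fact
fullTransfer f (Block fst' mids lst) fact =
  f lst (foldl (flip f) (f fst' fact) mids)

-- ...and one that, like analyzeFwdBlocks, only looks at the first and
-- last nodes; the driver calling either of these stays fully generic.
firstLastTransfer :: (n -> fact -> fact) -> Block n -> fact -> fact
firstLastTransfer f (Block fst' _ lst) fact = f lst (f fst' fact)
```

With `f = (+)` over a block `Block 1 [2,3] 4` and initial fact `0`, the
full transfer folds in every node while the first/last one skips the
middles; both share the same driver.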

Cheers,
Michal
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Dataflow analysis for Cmm

2016-10-17 Thread Michal Terepeta
On Mon, Oct 17, 2016 at 10:57 AM Jan Stolarek wrote:

> Michał,
>
> Dataflow module could indeed use cleanup. I have made two attempts at
> this in the past but I don't think any of them was merged - see [1] and
> [2]. [2] was mostly type-directed simplifications. It would be nice to
> have this included in one form or another. It sounds like you also have
> a more in-depth refactoring in mind. Personally, as long as it is
> semantically correct I think it will be a good thing. I would especially
> support removing dead code that we don't really use.
>
> [1] https://github.com/jstolarek/ghc/commits/js-hoopl-cleanup-v2
> [2] https://github.com/jstolarek/ghc/commits/js-hoopl-cleanup-v2


Ok, I'll have a look at this!
(did you intend to send two identical links?)

> > Second question: how could we merge this? (...)
> I'm not sure if I understand. The end result after merging will be
> exactly the same, right? Are you asking for advice what is the best way
> of doing this from a technical point of view? I would simply edit the
> existing module. Introducing a temporary second module seems like
> unnecessary extra work and perhaps complicating the patch review.
>

Yes, the end result would be the same - I'm merely asking what would be
preferred by GHC devs (i.e., I don't know how fine-grained patches to GHC
usually are).


> > I’m happy to export the code to Phab if you prefer - I wasn’t sure
> > what’s the recommended workflow for code that’s not ready for review…
> This is OK but please remember to set the status of the revision to
> "Planned changes" after uploading it to Phab so it doesn't sit in the
> reviewing queue.
>

Cool, I didn't know about the "Planned changes" status.
Thanks for mentioning it!

Cheers,
Michal


Dataflow analysis for Cmm

2016-10-16 Thread Michal Terepeta
Hi,

I was looking at cleaning up a bit the situation with dataflow analysis
for Cmm. In particular, I was experimenting with rewriting the current
`cmm.Hoopl.Dataflow` module:
- To only include the functionality to do analysis (since GHC doesn’t
  seem to use the rewriting part).
  Benefits:
  - Code simplification (we could remove a lot of unused code).
  - Makes it clear what we’re actually using from Hoopl.
- To have an interface that works with transfer functions operating on a
  whole basic block (`Block CmmNode C C`).
  This means that it would be up to the user of the algorithm to traverse
  the whole block.
  Benefits:
  - Further simplifications.
  - We could remove the `analyzeFwdBlocks` hack, which AFAICS is just a
    copy of `analyzeFwd` that ignores the middle nodes (probably for
    efficiency of analyses that only look at the first and last nodes of
    a block).
  - More flexible (e.g., the clients could know which block they’re
    processing; we could consider memoizing some per-block information,
    etc.).
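To make the proposed shape concrete, here is a minimal, self-contained
sketch of a block-oriented forward analysis driver. All the names
(`Analysis`, `runFwd`, `reachability`) are hypothetical and the "blocks"
are just successor lists - the real interface would work over
`Block CmmNode C C` and a `LabelMap`, and would use a proper worklist
rather than naive re-iteration:

```haskell
-- Hypothetical sketch of a block-oriented forward dataflow driver.
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

type Label = Int

data Analysis block fact = Analysis
  { aBottom   :: fact                  -- fact for not-yet-reached blocks
  , aJoin     :: fact -> fact -> fact  -- merge facts at control-flow joins
  , aTransfer :: block -> fact -> [(Label, fact)]
    -- ^ The client processes the WHOLE block and returns out-facts for
    -- its successors; it decides itself whether to walk every node or
    -- only look at the first/last ones.
  }

-- Naive chaotic iteration to a fixed point (a real driver would use a
-- worklist keyed by dirty blocks instead of re-sweeping everything).
runFwd :: Eq fact
       => Analysis block fact
       -> Map Label block -> Label -> fact -> Map Label fact
runFwd a blocks entry entryFact = go (Map.insert entry entryFact bottoms)
  where
    bottoms = Map.map (const (aBottom a)) blocks
    go facts
      | facts' == facts = facts
      | otherwise       = go facts'
      where
        facts' = Map.foldrWithKey step facts blocks
        step lbl blk acc =
          let inFact = Map.findWithDefault (aBottom a) lbl facts
          in foldr (\(s, f) m -> Map.adjust (aJoin a f) s m)
                   acc
                   (aTransfer a blk inFact)

-- Tiny example client: reachability, where a "block" is just its
-- successor list and the fact is "is this block reachable?".
reachability :: Analysis [Label] Bool
reachability = Analysis
  { aBottom   = False
  , aJoin     = (||)
  , aTransfer = \succs reachable -> [ (s, reachable) | s <- succs ]
  }
```

For the chain 1 -> 2 -> 3, `runFwd reachability ... 1 True` marks all
three blocks reachable after iterating to the fixed point.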

What do you think about this?

I have a branch that implements the above:
https://github.com/michalt/ghc/tree/dataflow2/1
It’s introducing a second parallel implementation (`cmm.Hoopl.Dataflow2`
module), so that it's possible to run ./validate while comparing the
results of the old implementation with the new one.

Second question: how could we merge this? (assuming that people are
generally ok with the approach) Some ideas:
- Change the cmm/Hoopl/Dataflow module itself along with the three
  analyses that use it in one step.
- Introduce the Dataflow2 module first, then switch the analyses, then
  remove any unused code that still depends on the old Dataflow module,
  and finally remove the old Dataflow module itself.
(Personally I'd prefer the second option, but I'm also ok with the first
one.)

I’m happy to export the code to Phab if you prefer - I wasn’t sure what’s
the recommended workflow for code that’s not ready for review…

Thanks,
Michal


Re: Hoopl question

2015-05-03 Thread Michal Terepeta

On Sun, May 3, 2015 at 7:39 PM Jan Stolarek jan.stola...@p.lodz.pl wrote:
> Michał,
>
> one of my students is currently working on this:
>
> https://ghc.haskell.org/trac/ghc/wiki/Hoopl/Cleanup
>
> as his BSc thesis (see #8315). It might turn out that he will also have
> enough time to focus on performance issues in Hoopl but at this point
> it is hard to tell.
>
> Janek


Sounds great! :-)

Which reminds me about another question I had -- the main reason to have
the specialized module in GHC (instead of relying on the Hoopl one) is
performance, right? (as in, the module is specialized for UniqSM, but
otherwise pretty close to Hoopl.Dataflow?)
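For context, the usual reason such a specialized copy is faster is that a
monad-polymorphic loop passes a `Monad` dictionary at runtime, whereas a
copy fixed to one concrete monad lets GHC inline the concrete `>>=`. A
minimal, self-contained illustration (the `UniqSM` here is a toy stand-in
for GHC's real unique-supply monad, and `iterM`/`iterUniq` are made-up
names):

```haskell
-- A tiny stand-in for GHC's UniqSM: a state monad over an Int supply.
newtype UniqSM a = UniqSM { runUniqSM :: Int -> (a, Int) }

instance Functor UniqSM where
  fmap f (UniqSM g) = UniqSM $ \s -> let (a, s') = g s in (f a, s')

instance Applicative UniqSM where
  pure a = UniqSM $ \s -> (a, s)
  UniqSM mf <*> UniqSM ma =
    UniqSM $ \s -> let (f, s')  = mf s
                       (a, s'') = ma s'
                   in (f a, s'')

instance Monad UniqSM where
  UniqSM ma >>= k =
    UniqSM $ \s -> let (a, s') = ma s in runUniqSM (k a) s'

-- Monad-polymorphic loop, as a Hoopl-like library would provide it:
-- every >>= goes through the Monad dictionary.
iterM :: Monad m => (a -> m a) -> Int -> a -> m a
iterM f n x | n <= 0    = pure x
            | otherwise = f x >>= iterM f (n - 1)

-- The same loop hand-specialized to UniqSM (what a specialized copy of
-- the Dataflow module effectively gives you): the concrete >>= can now
-- be inlined and the dictionary passing disappears.
iterUniq :: (a -> UniqSM a) -> Int -> a -> UniqSM a
iterUniq f n x | n <= 0    = pure x
               | otherwise = f x >>= iterUniq f (n - 1)
```

Both versions compute the same thing; a `SPECIALIZE` pragma on the
polymorphic one is the lighter-weight alternative to keeping a separate
copy.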

Thanks,
Michal


Hoopl question

2015-05-02 Thread Michal Terepeta
Hi,

I've just read through the Hoopl paper and then noticed
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/HooplPerformance
which is really interesting. But it seems that there were no updates to
the page in about 3 years, yet the new codegen seems to be using Hoopl...
Does anyone know what the current status of this is?

Thanks,
Michal