Re: GitLab forks and submodules

2019-01-08 Thread Ömer Sinan Ağacan
> As I mention in the documentation, those with commit bits should feel
> free to push branches to ghc/ghc.

This is sometimes not ideal, as it wastes GHC's CI resources. For example, I make
a lot of WIP commits to my work branches, and I don't want to keep the CI
machines busy with those.

Ömer

On Tue, 8 Jan 2019 at 04:53, Ben Gamari  wrote:
>
> Moritz Angermann  writes:
>
> > Can’t we have absolute submodule paths? Wouldn’t that alleviate the
> > issue?
> >
> Perhaps; I mentioned this possibility in my earlier response. It's not
> clear which trade-off is better overall, however.
>
> > When we all had branches on ghc/ghc this
> > was not an issue.
> >
> As I mention in the documentation, those with commit bits should feel
> free to push branches to ghc/ghc.
>
> Cheers,
>
> - Ben
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: GitLab forks and submodules

2019-01-07 Thread Ömer Sinan Ağacan
> while making the case of contributing patches with submodule changes more
> difficult

I don't understand this; can you give an example of what absolute paths would
make harder?

Looking at the wiki pages and scripts we need in order to make relative paths
work for everyone, I think it's clear that absolute paths would be better: CI
wouldn't need any scripts anymore, and users would need no instructions to make
cloning forks work.
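For concreteness, the whole trade-off lives in the `url` field of `.gitmodules`.
The entries below are illustrative (the real submodule list is longer and the
paths may differ):

```ini
# Relative URL: resolved against the superproject's own remote, so a
# clone of a fork only works if the fork mirrors the submodule layout
# (hence the CI scripts and user instructions):
[submodule "libraries/Cabal"]
	path = libraries/Cabal
	url = ../libraries/Cabal.git

# Absolute URL: resolves the same way no matter where the superproject
# was cloned from:
[submodule "libraries/Cabal"]
	path = libraries/Cabal
	url = https://gitlab.haskell.org/ghc/packages/Cabal.git
```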

Ömer

On Tue, 8 Jan 2019 at 04:53, Ben Gamari  wrote:
>
> Moritz Angermann  writes:
>
> > Can’t we have absolute submodule paths? Wouldn’t that alleviate the
> > issue?
> >
> Perhaps; I mentioned this possibility in my earlier response. It's not
> clear which trade-off is better overall, however.
>
> > When we all had branches on ghc/ghc this
> > was not an issue.
> >
> As I mention in the documentation, those with commit bits should feel
> free to push branches to ghc/ghc.
>
> Cheers,
>
> - Ben
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: [GHC DevOps Group] Welcome to GitLab!

2019-01-07 Thread Ömer Sinan Ağacan
> submodules are still pulling from git.haskell.org, is there an easy way to 
> fix that?

`git submodule sync` should fix that.
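For anyone else hitting this, here is a self-contained sketch of what `git
submodule sync` does (the repository names are made up for the demo): after you
change a URL in `.gitmodules`, `sync` copies it into the clone's `.git/config`,
which is what the submodule machinery actually consults.

```shell
set -eu
tmp="$(mktemp -d)"

# A throwaway "upstream" repo to act as the submodule.
git init -q "$tmp/sub"
git -C "$tmp/sub" -c user.email=a@b -c user.name=t commit -q --allow-empty -m init

# A superproject that uses it as a submodule.
git -C / init -q "$tmp/main" 2>/dev/null || git init -q "$tmp/main"
git -C "$tmp/main" -c protocol.file.allow=always submodule --quiet add "$tmp/sub" sub
git -C "$tmp/main" -c user.email=a@b -c user.name=t commit -q -m "add submodule"

# Simulate a hosting move (e.g. git.haskell.org -> gitlab.haskell.org) by
# pointing .gitmodules at a new URL, then sync it into .git/config.
git -C "$tmp/main" config -f .gitmodules submodule.sub.url "$tmp/sub-moved"
git -C "$tmp/main" submodule --quiet sync
git -C "$tmp/main" config submodule.sub.url   # now reports the new URL
```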

Ömer

On Mon, 7 Jan 2019 at 11:42, Simon Marlow  wrote:
>
> Congrats Ben and co! This is a huge step forwards.
>
> On Thu, 27 Dec 2018 at 06:27, Ben Gamari  wrote:
>>
>>
>> git remote set-url origin https://gitlab.haskell.org/ghc/ghc.git
>> git remote set-url --push origin g...@gitlab.haskell.org:ghc/ghc
>>
>> This is all that should be necessary; a quick `git pull origin master`
>> should verify that everything is working as expected.
>
>
> submodules are still pulling from git.haskell.org, is there an easy way to 
> fix that?
>
> Cheers
> Simon
> ___
> Ghc-devops-group mailing list
> ghc-devops-gr...@haskell.org
> https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group


Building dictionary terms in Core?

2019-01-06 Thread Ömer Sinan Ağacan
Hi,

In #15646 (see the recent discussion in GitLab MR 55) we need dictionary
arguments in Core (in the desugarer) to apply to functions like `fromRational ::
Fractional a => Rational -> a`, but we don't know how to build the dictionary
term (`Fractional a`) in Core. Can anyone who knows how to do this help us in
the MR?

Thanks,

Ömer


Re: Better DWARF info for Cmm procedures?

2019-01-06 Thread Ömer Sinan Ağacan
> However, there is also a slightly more fundamental issue here: GHC's NCG
> handles DWARF information with block granularity. Fixing this will be a
> bit more involved. See compiler/nativeGen/Dwarf.hs for details.
>
> One alternative would be to just finish debug information in the LLVM
> backend and use this instead (originally D2343, although mpickering has
> a newer version).

But the LLVM backend also uses the same debug info we generate for the NCG, no?
So I think the debug info would still be at block granularity?

How hard do you think it would be to do the refactoring to generate debug info
for each Cmm source line instead of for each RawCmm block?

Ömer

On Sun, 6 Jan 2019 at 14:47, Ben Gamari  wrote:
>
> Ömer Sinan Ağacan  writes:
>
> > Hi,
> >
> > Currently debugging Cmm is a bit painful because we don't have enough debug
> > information to map assembly to Cmm lines, so I have to do the mapping manually.
> > However I realized that when building .cmm files we actually generate some
> > debug information, in the form of "ticks":
> >
> > //tick src
> > _c2e::I64 = I64[R1 + 32];
> >
> > Here the tick says that this assignment is for this Cmm line in Apply.cmm:
> >
> > Words = StgAP_STACK_size(ap);
> >
> > I was wondering what needs to be done to generate DWARF information from 
> > those
> > so that gdb can show Cmm line we're executing, and gdb commands like `next`,
> > `break` etc. work.
> >
> The DWARF information that we produce are indeed derived from these
> source notes. If you compile a C-- module with -g3 you will find the
> resulting object file should have line number information.
>
> > I also realize that we don't consistently generate these ticks for all Cmm
> > lines, for example, in the same Cmm dump there isn't a tick before this 
> > line:
> >
> Indeed the C-- parser doesn't produce as many source notes
> as you might find in C-- from the STG pipeline. Essentially it only adds
> source notes on flow control constructs and assignments (see uses of
> withSourceNote in CmmParse.y).
>
> However, there is also a slightly more fundamental issue here: GHC's NCG
> handles DWARF information with block granularity. Fixing this will be a
> bit more involved. See compiler/nativeGen/Dwarf.hs for details.
>
> One alternative would be to just finish debug information in the LLVM
> backend and use this instead (originally D2343, although mpickering has
> a newer version).
>
> Cheers,
>
> - Ben


Better DWARF info for Cmm procedures?

2019-01-05 Thread Ömer Sinan Ağacan
Hi,

Currently debugging Cmm is a bit painful because we don't have enough debug
information to map assembly to Cmm lines, so I have to do the mapping manually.
However, I realized that when building .cmm files we actually generate some
debug information, in the form of "ticks":

//tick src
_c2e::I64 = I64[R1 + 32];

Here the tick says that this assignment is for this Cmm line in Apply.cmm:

Words = StgAP_STACK_size(ap);

I was wondering what needs to be done to generate DWARF information from those
ticks, so that gdb can show the Cmm line we're executing and gdb commands like
`next`, `break`, etc. work.
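For reference, whatever line information the existing ticks already produce can
be inspected today; something like the following sketch (file names
illustrative; `-g3` is GHC's maximum debug-info level) shows whether a given
Cmm object file gets a DWARF line table at all:

```shell
# Compile a Cmm module with full debug info and dump its DWARF line
# table; lines that only carry block-level info show up as big gaps
# in the source-line column.
ghc -c -g3 Apply.cmm -o Apply.o
objdump --dwarf=decodedline Apply.o | head -n 30
```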

I also realized that we don't consistently generate these ticks for all Cmm
lines; for example, in the same Cmm dump there isn't a tick before this line:

(_c2j::I64) = call MO_Cmpxchg W64(R1, stg_AP_STACK_info,
stg_WHITEHOLE_info);

It's actually for Apply.cmm:646.

So there are two problems:

- Generate ticks for _all_ Cmm lines
- Generate DWARF information from those so that gdb can show current Cmm line,
  and commands like `next` and `break` work.

Does anyone know how hard this would be to implement? I'm wondering if we could
turn this into a SoC project. It's a very well-defined task, and given that we
already have some DWARF support, perhaps it's not too hard for a SoC.

Ömer


Re: Residency profiles

2018-12-06 Thread Ömer Sinan Ağacan

Hi,

> I think what we want is a way to trigger GC at very regular intervals, after
> (say) each 10kbytes or 100kbytes or 1Mbyte  of allocation.  That might be
> expensive, but we’d get reproducible results.

If we could fix the nursery size to 10kb, that would trigger a GC for every 10kb
of allocation (you could still allocate large objects, as those are not allocated
in the nursery, but perhaps that's not a problem in your benchmarks). Then by
setting -G1 you could turn all GCs into major GCs (because the first generation
is always collected). Note that because each capability has its own nursery, you
may want to set the nursery size to alloc_per_gc / num_of_caps if you need more
than one capability.
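A rough approximation of that recipe with existing RTS flags (illustrative;
`./bench` is a placeholder for the benchmark binary, and large-object allocation
still bypasses the nursery):

```shell
# -A sets the per-capability allocation area (nursery) to ~10kb, -G1
# makes every collection a major collection, -s prints GC stats at exit.
./bench +RTS -A10k -G1 -s -RTS

# With N capabilities, divide the desired allocation-per-GC by N:
./bench +RTS -N2 -A5k -G1 -s -RTS
```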

> I don’t think that is possible right now – see the ticket – but it would be
> easy enough to do wouldn’t it?  Just give only 10k or 100k or 1M to the
> allocator when setting it running again.

Right. A parameter for fixing the nursery size would be easy to implement, I
think: just add a new flag, then use it as the nursery size in
GC.c:resize_nursery().

"Max. residency" is really hard to measure (need to do very frequent GCs),
perhaps a better question to ask is "residency when the program is in state S".
This is also hard to measure if your program is threaded or have other
non-determinism, but this lets you decide when to measure residency. Currently
we can't tell the GC to print residency stats, but perhaps we could implement a
variant of `performGC` that prints residency after the GC. So in your program
you could add `performGCPrintStats` after every iteration or step etc. Not sure
how useful this would be, but just an idea..
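A sketch of what such a `performGCPrintStats` could look like from library code,
using the stats the RTS already exposes (this assumes the program is run with
`+RTS -T` so stats are collected; field names as in GHC.Stats of GHC >= 8.2):

```haskell
import System.Mem (performMajorGC)
import GHC.Stats (getRTSStats, gc, gcdetails_live_bytes)

-- Force a major GC, then report how much data survived it, i.e. the
-- residency at this exact program point.
performGCPrintStats :: IO ()
performGCPrintStats = do
  performMajorGC
  stats <- getRTSStats
  putStrLn ("live bytes after major GC: "
            ++ show (gcdetails_live_bytes (gc stats)))
```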

On 6.12.2018 13:09, Simon Peyton Jones wrote:

Simon, Ben, Omer

As you’ll see in comments 55-72 of https://ghc.haskell.org/trac/ghc/ticket/9476,
Sebastian has been a bit flummoxed by the task of measuring residency profiles;
that is, how much data is truly live during execution.


A major GC measures that, but we are vulnerable to exactly when it happens (even 
with -G1) and that can lead to irreproducible results.


I think what we want is a way to trigger GC at very regular intervals, after 
(say) each 10kbytes or 100kbytes or 1Mbyte  of allocation.  That might be 
expensive, but we’d get reproducible results.


I don’t think that is possible right now – see the ticket – but it would be easy 
enough to do wouldn’t it?  Just give only 10k or 100k or 1M to the allocator 
when setting it running again.


Would you consider this?  Or are we just missing something obvious?

Needless to say, we want to do all this with full optimisation on, no 
cost-centre profiling.


Thanks

Simon



--
Ömer Sinan Ağacan, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com

Registered in England & Wales, OC335890
118 Wymering Mansions, Wymering Road, London W9 2NF, England


How to distinguish local ids from imported ones in Stg? Confused about isLocalId/isLocalVar

2018-11-20 Thread Ömer Sinan Ağacan
Hi,

I just found out that the semantics of isLocalId/isLocalVar change during
compilation. I realized this the hard way (after some debugging), but later
found that this is documented in this note (see the last line):

Note [GlobalId/LocalId]
~~~~~~~~~~~~~~~~~~~~~~~
A GlobalId is
  * always a constant (top-level)
  * imported, or data constructor, or primop, or record selector
  * has a Unique that is globally unique across the whole
GHC invocation (a single invocation may compile multiple modules)
  * never treated as a candidate by the free-variable finder;
it's a constant!

A LocalId is
  * bound within an expression (lambda, case, local let(rec))
  * or defined at top level in the module being compiled
  * always treated as a candidate by the free-variable finder

After CoreTidy, top-level LocalIds are turned into GlobalIds

So after simplification we can't distinguish a local id from an imported one.

Apparently I'm not the only one who was confused by this. In StgLint we check
in-scope variables with this:

checkInScope :: Id -> LintM ()
checkInScope id = LintM $ \_lf loc scope errs
 -> if isLocalId id && not (id `elemVarSet` scope) then
((), addErr errs (hsep [ppr id, dcolon, ppr (idType id),
text "is out of scope"]) loc)
else
((), errs)

Note that isLocalId here returns False for local but top-level bindings. Because
of this, if I drop some top-level bindings in the module I don't get a lint
error even though some ids become unbound.

I need to distinguish a top-level bound id from an imported id for two things:

- I want to make sure, in StgLint, that bindings in the Stg program are in
  dependency order (uses come after definitions). For this I need to treat
  imported ids as already bound, but for top-level bound ids I need to check if
  I already saw the definition.

- I want to run an analysis to map top-level bindings to whether they can
  contain CAF refs. For this for a top-level bound id I should check the
  environment, but for imported ids I should check idInfo.

The second analysis depends on the first property for efficiency. I could assume
that the first property holds and then always check the environment first in the
analysis, treating ids that are not in the environment as imported, but that
seems fragile.
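If the module being compiled is available at the analysis site, one way to
sidestep the GlobalId/LocalId question entirely is to look at the Name's
provenance instead of the idScope; a sketch (against the GHC-as-a-library API of
this era, so module names are the old unprefixed ones):

```haskell
import Id (Id, idName)
import Module (Module)
import Name (nameIsLocalOrFrom)

-- True for ids bound in the module being compiled -- including
-- top-level bindings after CoreTidy has turned them into GlobalIds --
-- and False for genuinely imported ids.
definedInThisModule :: Module -> Id -> Bool
definedInThisModule this_mod = nameIsLocalOrFrom this_mod . idName
```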

I'm wondering why we have to turn LocalIds into GlobalIds after simplification.
The note doesn't explain the reasoning. Would it be possible to preserve the
idScope of ids during the whole compilation?

Thanks,

Ömer


Re: More testsuite woes

2018-11-18 Thread Ömer Sinan Ağacan
This is probably not the same problem, but I also started to get an error today
when running the test suite. I think the problem is with the test runner,
because if I directly run the test command that the test runner prints, the test
works as expected.

Error reported during validate:

Unexpected results from:
TEST="T14052"

...

Unexpected failures:
   /tmp/ghctest-_w8smutp/test   spaces/perf/should_run/T14052.run
T14052 [[Errno 2] No such file or directory:
'/tmp/ghctest-_w8smutp/test
 spaces/perf/should_run/T14052.run/T14052.stats'] (ghci)

If I run only this test I get a slightly different error:

$ make test TEST=T14052 WAY=ghci
...
Unexpected failures:
   perf/should_run/T14052.run  T14052 [[Errno 2] No such file or
directory: 'perf/should_run/T14052.run/T14052.stats'] (ghci)

(note that the file path is different)

However, if I copy the command printed when running the test and run it myself,
the test works fine.

$ HC="/home/omer/haskell/ghc/inplace/test   spaces/ghc-stage2"
HC_OPTS="-dcore-lint -dcmm-lint -no-user-package-db -rtsopts
-fno-warn-missed-specialisations -fshow-warning-groups
-fdiagnostics-color=never -fno-diagnostics-show-caret -Werror=compat
-dno-debug-output " "/home/omer/haskell/ghc/inplace/test
spaces/ghc-stage2" --interactive -v0 -ignore-dot-ghci
-fno-ghci-history +RTS -I0.1 -RTS -fghci-leak-check -dcore-lint
-dcmm-lint -no-user-package-db -rtsopts
-fno-warn-missed-specialisations -fshow-warning-groups
-fdiagnostics-color=never -fno-diagnostics-show-caret -Werror=compat
-dno-debug-output   < T14052.script > T14052.out
...
$ diff T14052.out T14052.stdout
(output matches the expected output)

Ömer

On Sun, 18 Nov 2018 at 01:40, David Eichmann  wrote:
>
> The exception is thrown by python's semaphore class. We are using it to limit 
> the number of tests running concurrently, though I couldn't see anything 
> obviously wrong with the relevant code. If possible, could you find out what 
> `python --version` you have?
>
> David E
>
>
> On 17/11/18 21:55, Simon Peyton Jones wrote:
>
> Hmm.   The second run worked fine.  That’s unhelpful from a debugging point 
> of view, but it means I’m not stuck!
>
>
>
> Simon
>
>
>
> From: Simon Peyton Jones
> Sent: 17 November 2018 17:57
> To: 'David Eichmann' 
> Cc: ghc-devs@haskell.org
> Subject: RE: More testsuite woes
>
>
>
> I’ll try validating again to see if the same thing happens.
>
> Simon
>
>
>
> From: David Eichmann 
> Sent: 17 November 2018 16:48
> To: Simon Peyton Jones 
> Cc: ghc-devs@haskell.org
> Subject: Re: More testsuite woes
>
>
>
> Hello Simon,
>
> I had a quick look into this today, and spoke a bit with Ben about it. We 
> don't have a clear answer as to what is causing this at the moment. We'll 
> have to look more into this early next week.
>
> > It means I can’t validate at all.
>
> So you've tried to validate multiple times? I.e. does the error happen 
> deterministically or was it more of a one off event?
>
> - David E
>
>
>
> On 17/11/18 10:05, Simon Peyton Jones via ghc-devs wrote:
>
> David
>
> I got this error on Windows today.  It’s during the testsuite run of ‘sh
> validate’
>
> => T7969(normal) 3998 of 6647 [0, 25, 1]
>
> cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-b2z6dfqg/test   
> spaces/rename/should_compile/T7969.run" && $MAKE -s --no-print-directory T7969
>
> => T9127(normal) 3999 of 6647 [0, 25, 1]
>
> cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-b2z6dfqg/test   
> spaces/rename/should_compile/T9127.run" &&  "/c/code/HEAD/bindisttest/install 
>   dir/bin/ghc.exe" -c T9127.hs -dcore-lint -dcmm-lint -no-user-package-db 
> -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups 
> -fdiagnostics-color=never -fno-diagnostics-show-caret -Werror=compat 
> -dno-debug-output
>
> => T4426(normal) 4000 of 6647 [0, 25, 1]
>
> cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-b2z6dfqg/test   
> spaces/rename/should_compile/T4426.run" &&  "/c/code/HEAD/bindisttest/install 
>   dir/bin/ghc.exe" -c T4426.hs -dcore-lint -dcmm-lint -no-user-package-db 
> -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups 
> -fdiagnostics-color=never -fno-diagnostics-show-caret -Werror=compat 
> -dno-debug-output
>
> cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-b2z6dfqg/test   
> spaces/rename/should_compile/T5592.run" && ./T5592
>
> => T9778(normal) 4001 of 6647 [0, 25, 1]
>
> cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-b2z6dfqg/test   
> spaces/rename/should_compile/T9778.run" &&  "/c/code/HEAD/bindisttest/install 
>   dir/bin/ghc.exe" -c T9778.hs -dcore-lint -dcmm-lint -no-user-package-db 
> -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups 
> -fdiagnostics-color=never -fno-diagnostics-show-caret -Werror=compat 
> -dno-debug-output  -fwarn-unticked-promoted-constructors
>
> => T10816(normal) 4002 of 6647 [0, 25, 1]
>
> cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-b2z6dfqg/test   
> 

Using same stdout/stderr files for multiple tests?

2018-11-16 Thread Ömer Sinan Ağacan
I have a test that I want to run with different compile-time and runtime
parameters. I managed to reuse the source file across the different tests by
adding an extra_files(['source.hs']) to the tests, but I don't know how to do
the same for stdout/stderr files. Any ideas?

In more details, I have

test.hs
test.stdout

and two tests

test('test',
 [extra_run_opts('...')],
 compile_and_run,
 [])

test('test_debug',
 [extra_run_opts('...'),
  extra_hc_opts('-debug'),
  extra_files(['test.hs'])],
 compile_and_run,
 [])

The first test works fine, but the second test fails because I don't know how to
tell it to use test.stdout as the stdout file; it looks for test_debug.stdout
instead.

Thanks

Ömer


Re: Validate on master broken.

2018-11-16 Thread Ömer Sinan Ağacan
4efd1b487e fixed the build, but the test T15898 now fails when run in the ghci way.

Ömer

On Fri, 16 Nov 2018 at 07:39, Ömer Sinan Ağacan  wrote:
>
> This was also reported as #15900.
>
> Ömer
>
> On Fri, 16 Nov 2018 at 02:33, Simon Peyton Jones via ghc-devs  wrote:
> >
> > Bother -- my fault.  Sorry about that.  I should have
> > thought of Haddock.
> >
> > Thanks for fixing.
> >
> > Simon
> >
> > | -Original Message-
> > | From: ghc-devs  On Behalf Of Alec Theriault
> > | Sent: 15 November 2018 21:41
> > | To: Andreas Klebinger 
> > | Cc: ghc-devs@haskell.org
> > | Subject: Re: Validate on master broken.
> > |
> > | Thanks for noticing!
> > |
> > | I’m fixing this right now. The changes needed are really quite mundane…
> > |
> > | -Alec
> > |
> > | > On Nov 15, 2018, at 1:28 PM, Andreas Klebinger
> > |  wrote:
> > | >
> > | > Hello Devs,
> > | >
> > | > it seems Simons patch "Smarter HsType pretty-print for promoted
> > | datacons" broke ./validate.
> > | >
> > | >
> > | >
> > | >> I discovered that there were two copies of the PromotionFlag
> > | >>type (a boolean, with helpfully named data cons), one in
> > | >>IfaceType and one in HsType.  So I combined into one,
> > | >>PromotionFlag, and moved it to BasicTypes.
> > | >
> > | > In particular haddock seems to have depended on the changed
> > | constructors.
> > | > If anyone with access and knowledge of haddock could fix  this I would
> > | be grateful.
> > | >
> > | >
> > | > Cheers
> > | > Andreas
> > | > ___
> > | > ghc-devs mailing list
> > | > ghc-devs@haskell.org
> > | >
> > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> > |
> > | ___
> > | ghc-devs mailing list
> > | ghc-devs@haskell.org
> > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Validate on master broken.

2018-11-15 Thread Ömer Sinan Ağacan
This was also reported as #15900.

Ömer

On Fri, 16 Nov 2018 at 02:33, Simon Peyton Jones via ghc-devs  wrote:
>
> Bother -- my fault.  Sorry about that.  I should have
> thought of Haddock.
>
> Thanks for fixing.
>
> Simon
>
> | -Original Message-
> | From: ghc-devs  On Behalf Of Alec Theriault
> | Sent: 15 November 2018 21:41
> | To: Andreas Klebinger 
> | Cc: ghc-devs@haskell.org
> | Subject: Re: Validate on master broken.
> |
> | Thanks for noticing!
> |
> | I’m fixing this right now. The changes needed are really quite mundane…
> |
> | -Alec
> |
> | > On Nov 15, 2018, at 1:28 PM, Andreas Klebinger
> |  wrote:
> | >
> | > Hello Devs,
> | >
> | > it seems Simons patch "Smarter HsType pretty-print for promoted
> | datacons" broke ./validate.
> | >
> | >
> | >
> | >> I discovered that there were two copies of the PromotionFlag
> | >>type (a boolean, with helpfully named data cons), one in
> | >>IfaceType and one in HsType.  So I combined into one,
> | >>PromotionFlag, and moved it to BasicTypes.
> | >
> | > In particular haddock seems to have depended on the changed
> | constructors.
> | > If anyone with access and knowledge of haddock could fix  this I would
> | be grateful.
> | >
> | >
> | > Cheers
> | > Andreas
> | > ___
> | > ghc-devs mailing list
> | > ghc-devs@haskell.org
> | >
> | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> |
> | ___
> | ghc-devs mailing list
> | ghc-devs@haskell.org
> | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Unused RTS symbols disappearing in some flavors?

2018-11-14 Thread Ömer Sinan Ağacan
Hi,

When I build the RTS with flavors other than quick (and maybe some others),
unused symbols get removed, which makes debugging harder. Does anyone know what
setting to add to build.mk to make sure symbols won't be removed? Ideally I
should be able to add one line at the end of build.mk so that the symbols are
not removed regardless of the flavor.

Thanks,

Ömer


Re: [ANNOUNCE] GHC 8.4.4 released

2018-10-27 Thread Ömer Sinan Ağacan
Sorry for the typos in my previous email.

#16969 -> #15696 (https://ghc.haskell.org/trac/ghc/ticket/15696)
regressions -> regression tests
Phab:5201 -> Phab:D5201 (https://phabricator.haskell.org/D5201)

By "the primop" I mean dataToTag#.

Ömer

On Sat, 27 Oct 2018 at 10:23, Ömer Sinan Ağacan  wrote:
>
> Hi all,
>
> Just a quick update about #16969.
>
> The primop itself is buggy in 8.4 (and it should be buggy even in older
> versions -- although I haven't confirmed this) and 2 of the 3 regressions 
> added
> for it currently fail with GHC 8.4.4. I don't know what the plan is for fixing
> it in 8.4, Ben may say more about this, but I'm guessing that we'll see 
> another
> 8.4 release.
>
> So if you're using the primop directly, just don't! If you're not using it
> directly, then as David says the bug is much harder to trigger in GHC 8.4 (and
> even older versions) than in GHC 8.6, but we don't know if it's _impossible_ 
> to
> trigger in GHC 8.4 and older versions.
>
> We fixed the bug in GHC HEAD weeks ago (with Phab:5201), current investigation
> in #15696 is not blocking any releases, we're just tying up some loose ends 
> and
> doing refactoring to handle some similar primops more uniformly. This is only
> refactoring and documentation -- known bugs are already fixed.
>
> (I now realize that it would've been better to do this in a separate ticket to
> avoid confusion)
>
> Ömer
>
> On Sat, 27 Oct 2018 at 00:18, Artem Pelenitsyn  wrote:
> >
> > David, when you say "dataToTag# issue", you mean #15696? It seems from the 
> > discussion there that it is still under investigation.
> >
> > --
> > Best, Artem
> >
> > On Fri, 26 Oct 2018 at 17:02 David Feuer  wrote:
> >>
> >> On Fri, Oct 26, 2018 at 4:43 PM Carter Schonwald
> >>  wrote:
> >> >
> >> > Hey David, i'm looking at the git history andit doesn't seem to have any 
> >> > commits between 8.4.3 and 8.4.4 related to the dataToTag issue
> >> >
> >> > does any haskell code in the while trigger the bug on 8.4 series?
> >>
> >> I don't think anyone knows. It seems clear that it's considerably
> >> easier to trigger the bug in 8.6, but
> >> as far as I can tell, there's no reason to believe that it couldn't be
> >> triggered by realistic code in
> >> 8.4.
> >> ___
> >> ghc-devs mailing list
> >> ghc-devs@haskell.org
> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: [ANNOUNCE] GHC 8.4.4 released

2018-10-27 Thread Ömer Sinan Ağacan
Hi all,

Just a quick update about #16969.

The primop itself is buggy in 8.4 (and it should be buggy even in older
versions -- although I haven't confirmed this) and 2 of the 3 regressions added
for it currently fail with GHC 8.4.4. I don't know what the plan is for fixing
it in 8.4, Ben may say more about this, but I'm guessing that we'll see another
8.4 release.

So if you're using the primop directly, just don't! If you're not using it
directly, then as David says the bug is much harder to trigger in GHC 8.4 (and
even older versions) than in GHC 8.6, but we don't know if it's _impossible_ to
trigger in GHC 8.4 and older versions.

We fixed the bug in GHC HEAD weeks ago (with Phab:5201), current investigation
in #15696 is not blocking any releases, we're just tying up some loose ends and
doing refactoring to handle some similar primops more uniformly. This is only
refactoring and documentation -- known bugs are already fixed.

(I now realize that it would've been better to do this in a separate ticket to
avoid confusion)

Ömer

On Sat, 27 Oct 2018 at 00:18, Artem Pelenitsyn  wrote:
>
> David, when you say "dataToTag# issue", you mean #15696? It seems from the 
> discussion there that it is still under investigation.
>
> --
> Best, Artem
>
> On Fri, 26 Oct 2018 at 17:02 David Feuer  wrote:
>>
>> On Fri, Oct 26, 2018 at 4:43 PM Carter Schonwald
>>  wrote:
>> >
>> > Hey David, i'm looking at the git history andit doesn't seem to have any 
>> > commits between 8.4.3 and 8.4.4 related to the dataToTag issue
>> >
>> > does any haskell code in the while trigger the bug on 8.4 series?
>>
>> I don't think anyone knows. It seems clear that it's considerably
>> easier to trigger the bug in 8.6, but
>> as far as I can tell, there's no reason to believe that it couldn't be
>> triggered by realistic code in
>> 8.4.
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Why align all pinned array payloads on 16 bytes?

2018-10-22 Thread Ömer Sinan Ağacan
Thanks for all the answers. Another surprising thing about the pinned object
allocation primops is that the aligned allocator allows alignment to an
arbitrary number of bytes rather than to words (the documentation doesn't say
whether it's words or bytes, but it can be seen from the code that it actually
aligns to the given byte count). Is there a use case for this, or do people
mostly use alignment on word boundaries?

Ömer

On Wed, 17 Oct 2018 at 10:29, Sven Panne  wrote:
>
> On Tue, 16 Oct 2018 at 23:18, Simon Marlow  wrote:
>>
>> I vaguely recall that this was because 16 byte alignment is the minimum you 
>> need for certain foreign types, and it's what malloc() does.  Perhaps check 
>> the FFI spec and the guarantees that mallocForeignPtrBytes and friends 
>> provide?
>
>
> mallocForeignPtrBytes is defined in terms of malloc 
> (https://www.haskell.org/onlinereport/haskell2010/haskellch29.html#x37-28400029.1.3),
>  which in turn has the following guarantee 
> (https://www.haskell.org/onlinereport/haskell2010/haskellch31.html#x39-28700031.1):
>
>"... All storage allocated by functions that allocate based on a size in 
> bytes must be sufficiently aligned for any of the basic foreign types that 
> fits into the newly allocated storage. ..."
>
> The largest basic foreign types are Word64/Double and probably 
> Ptr/FunPtr/StablePtr 
> (https://www.haskell.org/onlinereport/haskell2010/haskellch8.html#x15-178.7),
>  so per spec you need at least an 8-byte alignement. But in an SSE-world I 
> would be *very* reluctant to use an alignment less strict than 16 bytes, 
> otherwise people will probably hate you... :-]
>
> Cheers,
>S.


Why align all pinned array payloads on 16 bytes?

2018-10-11 Thread Ömer Sinan Ağacan
Hi,

I just found out that we currently align all pinned array payloads to 16 bytes,
and I'm wondering why. I don't see any comments/notes on this, and it's also not
part of the primop documentation. We also have another primop for aligned
allocation: newAlignedPinnedByteArray#. Given that the alignment behavior of
newPinnedByteArray# is not documented and we have a separate primop for aligned
allocation, perhaps we can remove the alignment in newPinnedByteArray#.

Does anyone remember what the motivation was for always aligning pinned arrays?

Thanks

Ömer


Shall we make -dsuppress-uniques default?

2018-10-05 Thread Ömer Sinan Ağacan
I asked this on IRC and didn't hear a lot of opposition, so as the next step
I'd like to ask ghc-devs.

I literally never need the details on uniques that we currently print by
default. I either don't care about variables too much (when not comparing the
output with some other output), or I need -dsuppress-uniques (when comparing
outputs). The problem is I have to remember to add -dsuppress-uniques if I'm
going to compare the outputs, and if I decide to compare outputs after the fact
I need to re-generate them with -dsuppress-uniques. This takes time and effort.

If you're also of the same opinion I suggest making -dsuppress-uniques default,
and providing a -dno-suppress-uniques (if it doesn't already exist).

Ömer


nofib oldest GHC to support?

2018-09-30 Thread Ömer Sinan Ağacan
Do we have a policy on the oldest GHC to support in nofib? I'm currently doing
some hacking on nofib to parse some new info printed by a modified GHC, and I
think we can do a lot of cleaning (at the very least remove some regexes and
parsers) if we decide on which GHCs to support.

I checked the README and RunningNoFib wiki page but couldn't see anything
relevant.

Thanks

Ömer


Re: [Haskell] [ANNOUNCE] GHC 8.6.1 released

2018-09-22 Thread Ömer Sinan Ağacan
Thanks to everyone involved with the release!

It's a bit sad that we don't have DWARF bindists this time (we had them for
8.4.2 and 8.4.3). DWARF builds make debugging GHC much easier, and because
runtime panics include stack traces in DWARF builds, tickets reported against
those bindists tend to be more helpful.

Would it be possible to provide DWARF bindists at a later date?

Ömer

Ben Gamari , Sat, 22 Sep 2018, 03:57, wrote:
>
> Hello everyone,
>
> The GHC team is pleased to announce the availability of GHC 8.6.1, the
> fourth major release in the GHC 8 series. The source distribution, binary
> distributions, and documentation for this release are available at
>
> https://downloads.haskell.org/~ghc/8.6.1
>
> The 8.6 release fixes over 400 bugs from the 8.4 series and introduces a
> number of exciting features. These most notably include:
>
>  * A new deriving mechanism, `deriving via`, providing a convenient way
>for users to extend Haskell's typeclass deriving mechanism
>
>  * Quantified constraints, allowing forall quantification in constraint 
> contexts
>
>  * An early version of the GHCi `:doc` command
>
>  * The `ghc-heap-view` package, allowing introspection into the
>structure of GHC's heap
>
>  * Valid hole fit hints, helping the user to find terms to fill typed
>holes in their programs
>
>  * The BlockArguments extension, allowing the `$` operator to be omitted
>in some unambiguous contexts
>
>  * An exciting new plugin mechanism, source plugins, allowing plugins to
>inspect and modify a wide variety of compiler representations.
>
>  * Improved recompilation checking when plugins are used
>
>  * Significantly better handling of macOS linker command size limits,
>avoiding linker errors while linking large projects
>
>  * The next phase of the MonadFail proposal, enabling
>-XMonadFailDesugaring by default
>
> A full list of the changes in this release can be found in the
> release notes:
>
> 
> https://downloads.haskell.org/~ghc/8.6.1/docs/html/users_guide/8.6.1-notes.html
>
> Perhaps of equal importance, GHC 8.6 is the second major release made
> under GHC's accelerated six-month release schedule and the first set of
> binary distributions built primarily using our new continuous
> integration scheme. While the final 8.6 release is around three weeks
> later than initially scheduled due to late-breaking bug reports, we
> expect that the 8.8 release schedule shouldn't be affected.
>
> Thanks to everyone who has contributed to developing, documenting, and
> testing this release!
>
> As always, let us know if you encounter trouble.
>
>
> How to get it
> ~
>
> The easy way is to go to the web page, which should be self-explanatory:
>
> https://www.haskell.org/ghc/
>
> We supply binary builds in the native package format for many
> platforms, and the source distribution is available from the same
> place.
>
> Packages will appear as they are built - if the package for your
> system isn't available yet, please try again later.
>
>
> Background
> ~~
>
> Haskell is a standard lazy functional programming language.
>
> GHC is a state-of-the-art programming suite for Haskell.  Included is
> an optimising compiler generating efficient code for a variety of
> platforms, together with an interactive system for convenient, quick
> development.  The distribution includes space and time profiling
> facilities, a large collection of libraries, and support for various
> language extensions, including concurrency, exceptions, and foreign
> language interfaces. GHC is distributed under a BSD-style open source license.
>
> A wide variety of Haskell related resources (tutorials, libraries,
> specifications, documentation, compilers, interpreters, references,
> contact information, links to research groups) are available from the
> Haskell home page (see below).
>
>
> On-line GHC-related resources
> ~~
>
> Relevant URLs on the World-Wide Web:
>
> GHC home page  https://www.haskell.org/ghc/
> GHC developers' home page  https://ghc.haskell.org/trac/ghc/
> Haskell home page  https://www.haskell.org/
>
>
> Supported Platforms
> ~~~
>
> The list of platforms we support, and the people responsible for them,
> is here:
>
>https://ghc.haskell.org/trac/ghc/wiki/Contributors
>
> Ports to other platforms are possible with varying degrees of
> difficulty.  The Building Guide describes how to go about porting to a
> new platform:
>
> https://ghc.haskell.org/trac/ghc/wiki/Building
>
>
> Developers
> ~~
>
> We welcome new contributors.  Instructions on accessing our source
> code repository, and getting started with hacking on GHC, are
> available from the GHC's developer's site run by Trac:
>
>   https://ghc.haskell.org/trac/ghc/
>
>
> Mailing lists
> ~
>
> We run mailing lists for GHC users and bug reports; to subscribe, use
> the web interfaces at
>
> 

Re: Heap allocation in the RTS

2018-09-20 Thread Ömer Sinan Ağacan
> Are you saying allocateMightFail ignores the usual nursery size?

Right, so normally in Cmm you'd do something like `if (Hp + size > HpLim) {
trigger GC }`, but allocateMightFail adds more blocks to the nursery instead.
Maybe just look at the code, it's quite simple.

I don't know how to check Hp in the RTS and trigger a GC. I'd do that part in
Cmm as there are lots of Cmm functions that do this already (in PrimOps.cmm
and maybe elsewhere).

Ömer
David Feuer , Thu, 20 Sep 2018, 12:50, wrote:
>
> I'm not sure I understand. Are you saying allocateMightFail ignores the usual 
> nursery size? That's not my intention. It would actually be just fine to 
> simply fail if GC would be required--I can then back off, fail out to CMM, 
> trigger a GC there, and retry. Or I could perform an extra heap check before 
> I start; that's a little silly, but I doubt it'll be expensive enough to 
> really matter here.
>
> On Thu, Sep 20, 2018, 5:42 AM Ömer Sinan Ağacan  wrote:
>>
>> allocateMightFail allocates new nursery blocks as long as you don't hit the
>> heap limit, so it returns NULL less often than you might think. In 
>> particular,
>> it doesn't return NULL when the nursery is full, instead it allocates a new
>> block and adds it to the nursery.
>>
>> I'd do the GC triggering part in Cmm code instead of C code -- I'm not sure 
>> if
>> it's possible to do this in C code. There should be some functions in
>> PrimOps.cmm that do heap allocation, maybe look there. I'd look for uses of
>> ALLOC_PRIM. The file HeapStackCheck.cmm may also be helpful (may at least 
>> give
>> an idea of how a GC is triggered).
>>
>> Ömer
>>
>> David Feuer , Thu, 20 Sep 2018, 12:34, wrote:
>> >
>> > If it returns NULL, then I need to back off what I'm doing and trigger a 
>> > GC. How do I do the latter?
>> >
>> > On Thu, Sep 20, 2018, 5:31 AM Ömer Sinan Ağacan  
>> > wrote:
>> >>
>> >> allocateMightFail does the heap check for you and returns NULL. For the 
>> >> current
>> >> capability you can use MyCapability() in Cmm and pass the value to the RTS
>> >> function you're implementing.
>> >>
>> >> Ömer
>> >>
>> >> David Feuer , Thu, 20 Sep 2018, 12:26, wrote:
>> >> >
>> >> > Aha! Okay. How do I get the current capability to pass to that? I guess 
>> >> > I should probably perform a heap check before calling lookupSta bleName 
>> >> > for simplicity, at least to start.
>> >> >
>> >> > On Thu, Sep 20, 2018, 5:16 AM Ömer Sinan Ağacan  
>> >> > wrote:
>> >> >>
>> >> >> Have you seen Storage.c:allocateMightFail ?
>> >> >>
>> >> >> Ömer
>> >> >>
>> >> >>
>> >> >> David Feuer , Thu, 20 Sep 2018, 11:32, wrote:
>> >> >> >
>> >> >> > I'm working on re-implementing the stable name system. For the new 
>> >> >> > design, it seems much cleaner to allocate stable names in 
>> >> >> > lookupStableName (in rts/StableName.c) rather than in the C-- code 
>> >> >> > that calls it. But I haven't seen RTS code that does heap 
>> >> >> > allocation. Is it possible? If so, how do I do it?


Re: Heap allocation in the RTS

2018-09-20 Thread Ömer Sinan Ağacan
allocateMightFail allocates new nursery blocks as long as you don't hit the
heap limit, so it returns NULL less often than you might think. In particular,
it doesn't return NULL when the nursery is full; instead it allocates a new
block and adds it to the nursery.

I'd do the GC triggering part in Cmm code instead of C code -- I'm not sure if
it's possible to do this in C code. There should be some functions in
PrimOps.cmm that do heap allocation, maybe look there. I'd look for uses of
ALLOC_PRIM. The file HeapStackCheck.cmm may also be helpful (may at least give
an idea of how a GC is triggered).

Ömer

David Feuer , Thu, 20 Sep 2018, 12:34, wrote:
>
> If it returns NULL, then I need to back off what I'm doing and trigger a GC. 
> How do I do the latter?
>
> On Thu, Sep 20, 2018, 5:31 AM Ömer Sinan Ağacan  wrote:
>>
>> allocateMightFail does the heap check for you and returns NULL. For the 
>> current
>> capability you can use MyCapability() in Cmm and pass the value to the RTS
>> function you're implementing.
>>
>> Ömer
>>
>> David Feuer , Thu, 20 Sep 2018, 12:26, wrote:
>> >
>> > Aha! Okay. How do I get the current capability to pass to that? I guess I 
>> > should probably perform a heap check before calling lookupSta bleName for 
>> > simplicity, at least to start.
>> >
>> > On Thu, Sep 20, 2018, 5:16 AM Ömer Sinan Ağacan  
>> > wrote:
>> >>
>> >> Have you seen Storage.c:allocateMightFail ?
>> >>
>> >> Ömer
>> >>
>> >>
>> >> David Feuer , Thu, 20 Sep 2018, 11:32, wrote:
>> >> >
>> >> > I'm working on re-implementing the stable name system. For the new 
>> >> > design, it seems much cleaner to allocate stable names in 
>> >> > lookupStableName (in rts/StableName.c) rather than in the C-- code that 
>> >> > calls it. But I haven't seen RTS code that does heap allocation. Is it 
>> >> > possible? If so, how do I do it?


Re: Heap allocation in the RTS

2018-09-20 Thread Ömer Sinan Ağacan
allocateMightFail does the heap check for you and returns NULL. For the current
capability you can use MyCapability() in Cmm and pass the value to the RTS
function you're implementing.

Ömer

David Feuer , Thu, 20 Sep 2018, 12:26, wrote:
>
> Aha! Okay. How do I get the current capability to pass to that? I guess I 
> should probably perform a heap check before calling lookupSta bleName for 
> simplicity, at least to start.
>
> On Thu, Sep 20, 2018, 5:16 AM Ömer Sinan Ağacan  wrote:
>>
>> Have you seen Storage.c:allocateMightFail ?
>>
>> Ömer
>>
>>
>> David Feuer , Thu, 20 Sep 2018, 11:32, wrote:
>> >
>> > I'm working on re-implementing the stable name system. For the new design, 
>> > it seems much cleaner to allocate stable names in lookupStableName (in 
>> > rts/StableName.c) rather than in the C-- code that calls it. But I haven't 
>> > seen RTS code that does heap allocation. Is it possible? If so, how do I 
>> > do it?


Re: Feedback request: GHC performance test-suite

2018-09-13 Thread Ömer Sinan Ağacan
Thanks for doing this. I think it's great that someone's working on the test
suite.

About storing in git notes: what is the format here? If I want to see numbers
for a given commit, does `git notes show ` show me all the numbers at
that commit, or only the differences from the previous commit? Do we have an example
note to look at to get an idea of what the format is?

I think it'd be better if we could make a more concrete plan for the
"drifting" problem mentioned under future work before merging this. I don't
know what the plans are here (how much time will be spent on this work), but that
future work may never happen, so perhaps for the time being we want to keep the
reference values (though I'm not sure where to keep them) to avoid making some
programs run 20x slower in a few years. I think it'd also be better if we could
highlight this change more in the proposal, as I think it's an important one.

Ömer

David Eichmann , Thu, 13 Sep 2018, 11:25, wrote:
>
> Hello all,
>
> I've recently resumed some work started by Jared Weakly on the GHC test
> suite. This specifically regards performance tests. The work aims to,
> among some other things, reduce manual work and to log performance test
> results from our CI server. The proposed change is described in more
> detail on this wiki page:
> https://ghc.haskell.org/trac/ghc/wiki/Performance/Tests. I'd appreciate
> any feedback or questions on this.
>
> Thank you and have a great day,
> David Eichmann
>
> --
> David Eichmann, Haskell Consultant
> Well-Typed LLP, http://www.well-typed.com
>
> Registered in England & Wales, OC335890
> 118 Wymering Mansions, Wymering Road, London W9 2NF, England
>


Re: Running GHC 7.10.2 on Ubuntu 18.04 ?

2018-09-05 Thread Ömer Sinan Ağacan
Thanks, that worked!

Ömer

Vanessa McHale , Wed, 5 Sep 2018, 17:35, wrote:

> GHC 7.10.3 works fine for me when I use the hvr ppa
> https://launchpad.net/~hvr/+archive/ubuntu/ghc
>
> On 09/05/2018 09:23 AM, Ömer Sinan Ağacan wrote:
>
> Hi,
>
> I'm trying to use GHC 7.10.2 (the Debian 8 bindist from haskell.org) on Ubuntu
> 18.04. It's currently failing with linker errors when I compile `main = return
> ()`:
>
> /usr/bin/x86_64-linux-gnu-ld:
> /home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Base__5.o):
> relocation R_X86_64_32S against `.text' can not be used when making a
> PIE object; recompile with -fPIC
> /usr/bin/x86_64-linux-gnu-ld:
> /home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Base__125.o):
> relocation R_X86_64_32S against `.text' can not be used when making a
> PIE object; recompile with -fPIC
> /usr/bin/x86_64-linux-gnu-ld:
> /home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Signal__13.o):
> relocation R_X86_64_32S against `.text' can not be used when making a
> PIE object; recompile with -fPIC
> /usr/bin/x86_64-linux-gnu-ld:
> /home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Sync__199.o):
> relocation R_X86_64_32S against `.text' can not be used when making a
> PIE object; recompile with -fPIC
> /usr/bin/x86_64-linux-gnu-ld:
> /home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Exception__170.o):
> relocation R_X86_64_32S against symbol `stg_bh_upd_frame_info' can not
> be used when making a PIE object; recompile with -fPIC
>
> I'm getting about 700 of these. Does anyone know a way to make GHC 7.10.2 work
> on Ubuntu 18.04? Not sure if related but the ld version is
>
> ~ $ /usr/bin/x86_64-linux-gnu-ld --version
> GNU ld (GNU Binutils for Ubuntu) 2.30
> Copyright (C) 2018 Free Software Foundation, Inc.
> This program is free software; you may redistribute it under the terms of
> the GNU General Public License version 3 or (at your option) a
> later version.
> This program has absolutely no warranty.
>
> Thanks,
>
> Ömer
>
>
> --
>
>
>
> *Vanessa McHale*
> Functional Compiler Engineer | Chicago, IL
>
> Website: www.iohk.io <http://iohk.io>
> Twitter: @vamchale
> PGP Key ID: 4209B7B5
>


Running GHC 7.10.2 on Ubuntu 18.04 ?

2018-09-05 Thread Ömer Sinan Ağacan
Hi,

I'm trying to use GHC 7.10.2 (the Debian 8 bindist from haskell.org) on Ubuntu
18.04. It's currently failing with linker errors when I compile `main = return
()`:

/usr/bin/x86_64-linux-gnu-ld:
/home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Base__5.o):
relocation R_X86_64_32S against `.text' can not be used when making a
PIE object; recompile with -fPIC
/usr/bin/x86_64-linux-gnu-ld:
/home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Base__125.o):
relocation R_X86_64_32S against `.text' can not be used when making a
PIE object; recompile with -fPIC
/usr/bin/x86_64-linux-gnu-ld:
/home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Signal__13.o):
relocation R_X86_64_32S against `.text' can not be used when making a
PIE object; recompile with -fPIC
/usr/bin/x86_64-linux-gnu-ld:
/home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Sync__199.o):
relocation R_X86_64_32S against `.text' can not be used when making a
PIE object; recompile with -fPIC
/usr/bin/x86_64-linux-gnu-ld:
/home/omer/ghc_bins/ghc-7.10.3-bin/lib/ghc-7.10.3/base_HQfYBxpPvuw8OunzQu6JGM/libHSbase-4.8.2.0-HQfYBxpPvuw8OunzQu6JGM.a(Exception__170.o):
relocation R_X86_64_32S against symbol `stg_bh_upd_frame_info' can not
be used when making a PIE object; recompile with -fPIC

I'm getting about 700 of these. Does anyone know a way to make GHC 7.10.2 work
on Ubuntu 18.04? Not sure if related but the ld version is

~ $ /usr/bin/x86_64-linux-gnu-ld --version
GNU ld (GNU Binutils for Ubuntu) 2.30
Copyright (C) 2018 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or (at your option) a
later version.
This program has absolutely no warranty.

Thanks,

Ömer


Re: Build entirely broken

2018-08-22 Thread Ömer Sinan Ağacan
I wonder if this could be a problem with your tree? I just did

git pull
git submodule update --init
make distclean
./boot
./configure
make

and it worked. Note that I tried with "quick" build flavor.

Ömer

Simon Peyton Jones via ghc-devs , Wed, 22 Aug 2018, 13:12, wrote:
>
> From a clean build I’m getting this
>
> ld: cannot find libraries/ghc-prim/dist-install/build/GHC/CString.o: No such 
> file or directory
>
> ld: cannot find libraries/ghc-prim/dist-install/build/GHC/Classes.o: No such 
> file or directory
>
> ld: cannot find libraries/ghc-prim/dist-install/build/GHC/Debug.o: No such 
> file or directory
>
> ld: cannot find libraries/ghc-prim/dist-install/build/GHC/IntWord64.o: No 
> such file or directory
>
> ld: cannot find libraries/ghc-prim/dist-install/build/GHC/Magic.o: No such 
> file or directory
>
> ld: cannot find libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.o: 
> No such file or directory
>
> ld: cannot find libraries/ghc-prim/dist-install/build/GHC/Tuple.o: No such 
> file or directory
>
> ld: cannot find libraries/ghc-prim/dist-install/build/GHC/Types.o: No such 
> file or directory
>
>
>
> And indeed those files don’t exist.  In earlier builds they were there.
>
> The command line for eg CString was
>
> "inplace/bin/ghc-stage1" -hisuf hi -osuf  o -hcsuf hc -static  -O0 -H64m 
> -Wall -fllvm-fill-undef-with-garbage -Werror -this-unit-id 
> ghc-prim-0.5.3 -hide-all-packages -i -ilibraries/ghc-prim/. 
> -ilibraries/ghc-prim/dist-install/build 
> -Ilibraries/ghc-prim/dist-install/build 
> -ilibraries/ghc-prim/dist-install/build/./autogen 
> -Ilibraries/ghc-prim/dist-install/build/./autogen -Ilibraries/ghc-prim/.
> -optP-include 
> -optPlibraries/ghc-prim/dist-install/build/./autogen/cabal_macros.h 
> -package-id rts -this-unit-id ghc-prim -XHaskell2010 -O -dcore-lint 
> -dno-debug-output -ticky  -no-user-package-db -rtsopts  -Wno-trustworthy-safe 
> -Wno-deprecated-flags -Wnoncanonical-monad-instances  -odir 
> libraries/ghc-prim/dist-install/build -hidir 
> libraries/ghc-prim/dist-install/build -stubdir 
> libraries/ghc-prim/dist-install/build -split-objs  -dynamic-too -c 
> libraries/ghc-prim/./GHC/CString.hs -o 
> libraries/ghc-prim/dist-install/build/GHC/CString.o -dyno 
> libraries/ghc-prim/dist-install/build/GHC/CString.dyn_o
>
> which looks right.
>
> What’s going on?
>
> This has me totally stalled.
>
> Simon
>


Re: Can't build 8.6.1-beta1 with debugging.

2018-08-14 Thread Ömer Sinan Ağacan
Hi Mateusz,

> /usr/bin/ld.gold: error: cannot find -lHSrts_thr_debug_p

We currently don't ship GHC with profiling + debug + threaded runtime. See my
previous email on this:
https://mail.haskell.org/pipermail/ghc-devs/2018-July/015982.html

I show a way to enable these runtimes in
https://ghc.haskell.org/trac/ghc/ticket/15508

Hope this helps,

Ömer

Mateusz Kowalczyk , Tue, 14 Aug 2018, 09:19, wrote:
>
> Hi all,
>
> I wanted to try our codebase with 8.6. I happened to already have 
> 8.6.0.20180714 ready so I started with that.
>
> Compilation went well but I got a segfault when running a benchmark we have 
> with profiling on. GDB told me the segfault was in stg_ap_p_info in 
> AutoApply.cmm which as I understand is generated. Strange but OK... I decided 
> to try unprofiled with LLVM to see if some LLVM issue we had with current 
> version (7.10.x) has gone away. Sadly I encountered a segfault during regular 
> compilation (clean build, LLVM 6.0). I had no debugging symbols.
>
> Next I decided to try the beta1 version. I copied mk/build.mk.sample into 
> mk/build.mk and added the following:
>
> GhcDebugged = YES
> GhcStage1HcOpts = -DDEBUG
> GhcStage2HcOpts = -DDEBUG
>
> I checked out the 8.6.1-beta1 tag then ran following.
>
> fuuzetsu@rubin:~/programming/ghc$ echo $PREFIX
> /usr/local/ghc/ghc-8.6.1-beta1
> fuuzetsu@rubin:~/programming/ghc$ ./configure --prefix=$PREFIX && make -j4 && 
> make install
>
> After some time I was met with:
>
> /usr/bin/ld.gold: error: cannot find -lHSrts_thr_debug_p
>
> Also a long chain of messages about undefined refs you can find at [1].
>
> Considering I haven't touched any settings beyond adding debug flags, it 
> surprised me a little that I can't build successfully. Please advise and feel 
> free to ask for any specs.
>
> uname -a:
> Linux rubin.tsuru.it 4.15.0-30-generic #32~16.04.1-Ubuntu SMP Thu Jul 26 
> 20:25:39 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
>
>
> [1]: https://gist.github.com/Fuuzetsu/7a65a963dd625386d13938ba1e22af5c
>
> --
> Mateusz K.


Re: Any ways to test a GHC build against large set of packages (including test suites)?

2018-08-10 Thread Ömer Sinan Ağacan
I also briefly looked at hackage.head. As far as I understand, it doesn't
provide an out-of-the-box way to build a large set of packages, right? It'd be
useful when I have a package that I want to test against GHC HEAD, but
currently it doesn't help me there, unless I'm missing something.

Ömer

Ömer Sinan Ağacan , Fri, 10 Aug 2018, 11:39, wrote:
>
> Hi,
>
> This is working great, I just generated my first report. One problem is 
> stm-2.4
> doesn't compile with GHC HEAD, we need stm-2.5.0.0. But that's not published 
> on
> Hackage yet, and latest nightly still uses stm-2.4.5.0. I wonder if there's
> anything that can be done about this. Apparently stm blocks 82 packages (I
> don't know if that's counting transitively or just packages that are directly
> blocked by stm). Any ideas about this?
>
> Ömer
>
> Ömer Sinan Ağacan , Thu, 9 Aug 2018, 14:45, wrote:
> >
> > Ah, I now realize that that command is supposed to print that output. I'll
> > continue following the steps and keep you updated if I get stuck again.
> >
> > Ömer
> >
> > Ömer Sinan Ağacan , Thu, 9 Aug 2018, 13:20, wrote:
> > >
> > > Hi Manuel,
> > >
> > > I'm trying stackage-head. I'm following the steps for the scheduled build 
> > > in
> > > .circleci/config.yml. So far steps I took:
> > >
> > > - Installed ghc-head (from [1]) to ~/ghc-head
> > > - Installed stackage-build-plan, stackage-curator and stackage-head (with
> > >   -fdev) from git repos, using stack.
> > > - export BUILD_PLAN=nightly-2018-07-30 (from config.yml)
> > > - curl 
> > > https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json
> > > --output metadata.json
> > > - curl 
> > > https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml
> > > --output $BUILD_PLAN.yaml
> > >
> > > Now I'm doing
> > >
> > > - ./.local/bin/stackage-head already-seen --target $BUILD_PLAN
> > > --ghc-metadata metadata.json --outdir build-reports
> > >
> > > but it's failing with
> > >
> > > The combination of target and commit is new to me
> > >
> > > Any ideas what I'm doing wrong?
> > >
> > > Thanks
> > >
> > > [1]: 
> > > https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/bindist.tar.xz
> > >
> > > Ömer
> > >
> > > > Ömer Sinan Ağacan , Tue, 7 Aug 2018, 23:28, wrote:
> > > >
> > > > Thanks for both suggestions. I'll try both and see which one works 
> > > > better.
> > > >
> > > > Ömer
> > > >
> > > > Manuel M T Chakravarty , Tue, 7 Aug 2018, 18:15, wrote:
> > > > >
> > > > > Hi Ömer,
> > > > >
> > > > > This is exactly the motivation for the Stackage HEAD work that we 
> > > > > have pushed at Tweag I/O in the context of the GHC DevOps group. Have 
> > > > > a look at
> > > > >
> > > > >   https://github.com/tweag/stackage-head
> > > > >
> > > > > and also the blog post from when the first version went live:
> > > > >
> > > > >   https://www.tweag.io/posts/2018-04-17-stackage-head-is-live.html
> > > > >
> > > > > Cheers,
> > > > > Manuel
> > > > >
> > > > > > On 06.08.2018 at 09:40, Ömer Sinan Ağacan wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I'd like to test some GHC builds + some compile and runtime flag 
> > > > > > combinations
> > > > > > against a large set of packages by building them and running test 
> > > > > > suites. For
> > > > > > this I need
> > > > > >
> > > > > > - A set of packages that are known to work with latest GHC
> > > > > > - A way to build them and run their test suites (if I could specify 
> > > > > > compile and
> > > > > >  runtime flags that'd be even better)
> > > > > >
> > > > > > I think stackage can serve as (1) but I don't know how to do (2). 
> > > > > > Can anyone
> > > > > > point me to the right direction? I vaguely remember some nix-based 
> > > > > > solution for
> > > > > > this that was being discussed on the IRC channel, but can't recall 
> > > > > > any details.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Ömer
> > > > >


Re: How to test master after breaking changes

2018-08-10 Thread Ömer Sinan Ağacan
Hi Artem,

I think currently the best you could do is to clone primitive's git repo
locally and install it from there, using `cd primitive; cabal install
--with-ghc=...`.

Note that you can run the test suite without these dependencies. The driver
skips the test if a dependency is not found. See also #15137.

(I wonder what CI does about this; I'm guessing it doesn't install
dependencies, so some of the tests are not run on CI.)

Ömer

Artem Pelenitsyn , Sat, 4 Aug 2018, 20:51, wrote:
>
> Hello devs,
>
> Wiki page on testing says that in order to run all tests you have to install 
> additional packages:
>
> https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests/Running#AdditionalPackages
>
> and kindly provides a command to do this:
>
> cabal install --with-compiler=`pwd`/inplace/bin/ghc-stage2 
> --package-db=`pwd`/inplace/lib/package.conf.d mtl parallel parsec primitive 
> QuickCheck random regex-compat syb stm utf8-string vector
>
> After commit af9b744bb, one of the packages, primitive, does not build anymore. 
> At least, its last released version on Hackage. I see that the problem has 
> been fixed on primitive's master (a2af610). But what should I do to actually 
> test master branch of GHC now?
>
> --
> Best wishes,
> Artem
>


Re: Any ways to test a GHC build against large set of packages (including test suites)?

2018-08-10 Thread Ömer Sinan Ağacan
Hi,

This is working great, I just generated my first report. One problem is stm-2.4
doesn't compile with GHC HEAD, we need stm-2.5.0.0. But that's not published on
Hackage yet, and latest nightly still uses stm-2.4.5.0. I wonder if there's
anything that can be done about this. Apparently stm blocks 82 packages (I
don't know if that's counting transitively or just packages that are directly
blocked by stm). Any ideas about this?

Ömer

Ömer Sinan Ağacan wrote on Thu, 9 Aug 2018 at 14:45:
>
> Ah, I now realize that that command is supposed to print that output. I'll
> continue following the steps and keep you updated if I get stuck again.
>
> Ömer
>
> Ömer Sinan Ağacan , 9 Ağu 2018 Per, 13:20
> tarihinde şunu yazdı:
> >
> > Hi Manuel,
> >
> > I'm trying stackage-head. I'm following the steps for the scheduled build in
> > .circleci/config.yml. So far steps I took:
> >
> > - Installed ghc-head (from [1]) to ~/ghc-head
> > - Installed stackage-build-plan, stackage-curator and stackage-head (with
> >   -fdev) from git repos, using stack.
> > - export BUILD_PLAN=nightly-2018-07-30 (from config.yml)
> > - curl 
> > https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json
> > --output metadata.json
> > - curl 
> > https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml
> > --output $BUILD_PLAN.yaml
> >
> > Now I'm doing
> >
> > - ./.local/bin/stackage-head already-seen --target $BUILD_PLAN
> > --ghc-metadata metadata.json --outdir build-reports
> >
> > but it's failing with
> >
> > The combination of target and commit is new to me
> >
> > Any ideas what I'm doing wrong?
> >
> > Thanks
> >
> > [1]: 
> > https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/bindist.tar.xz
> >
> > Ömer
> >
> > Ömer Sinan Ağacan , 7 Ağu 2018 Sal, 23:28
> > tarihinde şunu yazdı:
> > >
> > > Thanks for both suggestions. I'll try both and see which one works better.
> > >
> > > Ömer
> > >
> > > Manuel M T Chakravarty , 7 Ağu 2018 Sal, 18:15
> > > tarihinde şunu yazdı:
> > > >
> > > > Hi Ömer,
> > > >
> > > > This is exactly the motivation for the Stackage HEAD works that we have 
> > > > pushed at Tweag I/O in the context of the GHC DevOps group. Have a look 
> > > > at
> > > >
> > > >   https://github.com/tweag/stackage-head
> > > >
> > > > and also the blog post from when the first version went live:
> > > >
> > > >   https://www.tweag.io/posts/2018-04-17-stackage-head-is-live.html
> > > >
> > > > Cheers,
> > > > Manuel
> > > >
> > > > > Am 06.08.2018 um 09:40 schrieb Ömer Sinan Ağacan 
> > > > > :
> > > > >
> > > > > Hi,
> > > > >
> > > > > I'd like to test some GHC builds + some compile and runtime flag 
> > > > > combinations
> > > > > against a large set of packages by building them and running test 
> > > > > suites. For
> > > > > this I need
> > > > >
> > > > > - A set of packages that are known to work with latest GHC
> > > > > - A way to build them and run their test suites (if I could specify 
> > > > > compile and
> > > > >  runtime flags that'd be even better)
> > > > >
> > > > > I think stackage can serve as (1) but I don't know how to do (2). Can 
> > > > > anyone
> > > > > point me to the right direction? I vaguely remember some nix-based 
> > > > > solution for
> > > > > this that was being discussed on the IRC channel, but can't recall 
> > > > > any details.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Ömer
> > > >


Re: Any ways to test a GHC build against large set of packages (including test suites)?

2018-08-09 Thread Ömer Sinan Ağacan
Ah, I now realize that that command is supposed to print that output. I'll
continue following the steps and keep you updated if I get stuck again.

Ömer

Ömer Sinan Ağacan wrote on Thu, 9 Aug 2018 at 13:20:
>
> Hi Manuel,
>
> I'm trying stackage-head. I'm following the steps for the scheduled build in
> .circleci/config.yml. So far steps I took:
>
> - Installed ghc-head (from [1]) to ~/ghc-head
> - Installed stackage-build-plan, stackage-curator and stackage-head (with
>   -fdev) from git repos, using stack.
> - export BUILD_PLAN=nightly-2018-07-30 (from config.yml)
> - curl 
> https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json
> --output metadata.json
> - curl 
> https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml
> --output $BUILD_PLAN.yaml
>
> Now I'm doing
>
> - ./.local/bin/stackage-head already-seen --target $BUILD_PLAN
> --ghc-metadata metadata.json --outdir build-reports
>
> but it's failing with
>
> The combination of target and commit is new to me
>
> Any ideas what I'm doing wrong?
>
> Thanks
>
> [1]: 
> https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/bindist.tar.xz
>
> Ömer
>
> Ömer Sinan Ağacan , 7 Ağu 2018 Sal, 23:28
> tarihinde şunu yazdı:
> >
> > Thanks for both suggestions. I'll try both and see which one works better.
> >
> > Ömer
> >
> > Manuel M T Chakravarty , 7 Ağu 2018 Sal, 18:15
> > tarihinde şunu yazdı:
> > >
> > > Hi Ömer,
> > >
> > > This is exactly the motivation for the Stackage HEAD works that we have 
> > > pushed at Tweag I/O in the context of the GHC DevOps group. Have a look at
> > >
> > >   https://github.com/tweag/stackage-head
> > >
> > > and also the blog post from when the first version went live:
> > >
> > >   https://www.tweag.io/posts/2018-04-17-stackage-head-is-live.html
> > >
> > > Cheers,
> > > Manuel
> > >
> > > > Am 06.08.2018 um 09:40 schrieb Ömer Sinan Ağacan :
> > > >
> > > > Hi,
> > > >
> > > > I'd like to test some GHC builds + some compile and runtime flag 
> > > > combinations
> > > > against a large set of packages by building them and running test 
> > > > suites. For
> > > > this I need
> > > >
> > > > - A set of packages that are known to work with latest GHC
> > > > - A way to build them and run their test suites (if I could specify 
> > > > compile and
> > > >  runtime flags that'd be even better)
> > > >
> > > > I think stackage can serve as (1) but I don't know how to do (2). Can 
> > > > anyone
> > > > point me to the right direction? I vaguely remember some nix-based 
> > > > solution for
> > > > this that was being discussed on the IRC channel, but can't recall any 
> > > > details.
> > > >
> > > > Thanks,
> > > >
> > > > Ömer
> > >


Re: Any ways to test a GHC build against large set of packages (including test suites)?

2018-08-09 Thread Ömer Sinan Ağacan
Hi Manuel,

I'm trying stackage-head. I'm following the steps for the scheduled build in
.circleci/config.yml. So far steps I took:

- Installed ghc-head (from [1]) to ~/ghc-head
- Installed stackage-build-plan, stackage-curator and stackage-head (with
  -fdev) from git repos, using stack.
- export BUILD_PLAN=nightly-2018-07-30 (from config.yml)
- curl 
https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json
--output metadata.json
- curl 
https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml
--output $BUILD_PLAN.yaml

Now I'm doing

- ./.local/bin/stackage-head already-seen --target $BUILD_PLAN
--ghc-metadata metadata.json --outdir build-reports

but it's failing with

The combination of target and commit is new to me

Any ideas what I'm doing wrong?

Thanks

[1]: 
https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/bindist.tar.xz

Ömer

Ömer Sinan Ağacan wrote on Tue, 7 Aug 2018 at 23:28:
>
> Thanks for both suggestions. I'll try both and see which one works better.
>
> Ömer
>
> Manuel M T Chakravarty , 7 Ağu 2018 Sal, 18:15
> tarihinde şunu yazdı:
> >
> > Hi Ömer,
> >
> > This is exactly the motivation for the Stackage HEAD works that we have 
> > pushed at Tweag I/O in the context of the GHC DevOps group. Have a look at
> >
> >   https://github.com/tweag/stackage-head
> >
> > and also the blog post from when the first version went live:
> >
> >   https://www.tweag.io/posts/2018-04-17-stackage-head-is-live.html
> >
> > Cheers,
> > Manuel
> >
> > > Am 06.08.2018 um 09:40 schrieb Ömer Sinan Ağacan :
> > >
> > > Hi,
> > >
> > > I'd like to test some GHC builds + some compile and runtime flag 
> > > combinations
> > > against a large set of packages by building them and running test suites. 
> > > For
> > > this I need
> > >
> > > - A set of packages that are known to work with latest GHC
> > > - A way to build them and run their test suites (if I could specify 
> > > compile and
> > >  runtime flags that'd be even better)
> > >
> > > I think stackage can serve as (1) but I don't know how to do (2). Can 
> > > anyone
> > > point me to the right direction? I vaguely remember some nix-based 
> > > solution for
> > > this that was being discussed on the IRC channel, but can't recall any 
> > > details.
> > >
> > > Thanks,
> > >
> > > Ömer
> >


Re: Any ways to test a GHC build against large set of packages (including test suites)?

2018-08-07 Thread Ömer Sinan Ağacan
Thanks for both suggestions. I'll try both and see which one works better.

Ömer

Manuel M T Chakravarty wrote on Tue, 7 Aug 2018 at 18:15:
>
> Hi Ömer,
>
> This is exactly the motivation for the Stackage HEAD works that we have 
> pushed at Tweag I/O in the context of the GHC DevOps group. Have a look at
>
>   https://github.com/tweag/stackage-head
>
> and also the blog post from when the first version went live:
>
>   https://www.tweag.io/posts/2018-04-17-stackage-head-is-live.html
>
> Cheers,
> Manuel
>
> > Am 06.08.2018 um 09:40 schrieb Ömer Sinan Ağacan :
> >
> > Hi,
> >
> > I'd like to test some GHC builds + some compile and runtime flag 
> > combinations
> > against a large set of packages by building them and running test suites. 
> > For
> > this I need
> >
> > - A set of packages that are known to work with latest GHC
> > - A way to build them and run their test suites (if I could specify compile 
> > and
> >  runtime flags that'd be even better)
> >
> > I think stackage can serve as (1) but I don't know how to do (2). Can anyone
> > point me to the right direction? I vaguely remember some nix-based solution 
> > for
> > this that was being discussed on the IRC channel, but can't recall any 
> > details.
> >
> > Thanks,
> >
> > Ömer
>


Any ways to test a GHC build against large set of packages (including test suites)?

2018-08-06 Thread Ömer Sinan Ağacan
Hi,

I'd like to test some GHC builds + some compile and runtime flag combinations
against a large set of packages by building them and running test suites. For
this I need

- A set of packages that are known to work with latest GHC
- A way to build them and run their test suites (if I could specify compile and
  runtime flags that'd be even better)

I think stackage can serve as (1) but I don't know how to do (2). Can anyone
point me to the right direction? I vaguely remember some nix-based solution for
this that was being discussed on the IRC channel, but can't recall any details.

Thanks,

Ömer


Re: isAlive() too conservative -- does it cause leaks?

2018-07-19 Thread Ömer Sinan Ağacan
I created https://ghc.haskell.org/trac/ghc/ticket/15417 for this.

Ömer

Simon Marlow wrote on Thu, 19 Jul 2018 at 13:52:
>
> On 19 July 2018 at 11:09, Ömer Sinan Ağacan  wrote:
>>
>> Hi Simon,
>>
>> Currently isAlive considers all static closures as being alive. The code:
>>
>> // ignore static closures
>> //
>> // ToDo: This means we never look through IND_STATIC, which means
>> // isRetainer needs to handle the IND_STATIC case rather than
>> // raising an error.
>> //
>> // ToDo: for static closures, check the static link field.
>> // Problem here is that we sometimes don't set the link field, eg.
>> // for static closures with an empty SRT or CONSTR_NOCAFs.
>> //
>> if (!HEAP_ALLOCED_GC(q)) {
>> return p;
>> }
>>
>> I'd expect this to cause leaks when e.g. key of a WEAK is a static object. Is
>> this not the case?
>
>
> Correct, I believe weak pointers to static objects don't work (not sure if 
> there's a ticket for this, but if not there should be).
>
>> I think this is easy to fix but I may be missing something
>> and wanted to ask before investing into it. The idea:
>>
>> - Evacuate all static objects in evacuate() (including the ones with no SRTs)
>>   (assuming all static objects have a STATIC_FIELD, is this really the case?)
>
>
> This would be expensive. We deliberately don't touch the static objects in a 
> minor GC because it adds potentially tens of ms to the GC time, and the 
> optimisation to avoid evacuating the static objects with no SRTs is an 
> important one.
>
> Cheers
> Simon
>
>> - In isAlive() check if (STATIC_FIELD & static_flag) != 0. If it is then the
>>   object is alive.
>>
>> Am I missing anything?
>>
>> Thanks,
>>
>> Ömer
>
>


isAlive() too conservative -- does it cause leaks?

2018-07-19 Thread Ömer Sinan Ağacan
Hi Simon,

Currently isAlive considers all static closures as being alive. The code:

// ignore static closures
//
// ToDo: This means we never look through IND_STATIC, which means
// isRetainer needs to handle the IND_STATIC case rather than
// raising an error.
//
// ToDo: for static closures, check the static link field.
// Problem here is that we sometimes don't set the link field, eg.
// for static closures with an empty SRT or CONSTR_NOCAFs.
//
if (!HEAP_ALLOCED_GC(q)) {
return p;
}

I'd expect this to cause leaks when e.g. the key of a WEAK is a static object.
Is this not the case? I think this is easy to fix, but I may be missing
something and wanted to ask before investing time in it. The idea:

- Evacuate all static objects in evacuate() (including the ones with no SRTs)
  (assuming all static objects have a STATIC_FIELD, is this really the case?)

- In isAlive() check if (STATIC_FIELD & static_flag) != 0. If it is then the
  object is alive.
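
A minimal standalone sketch of the proposed check (illustrative only, not RTS
code; the names STATIC_FIELD and static_flag mirror the RTS, but everything
here is a simplified stand-in):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified model: the GC alternates static_flag between two bit
 * values on each major collection and marks reachable static closures
 * by setting the matching bit in their STATIC_FIELD. */
#define STATIC_FLAG_A 1u
#define STATIC_FLAG_B 2u

typedef struct {
    bool     heap_allocated;  /* stand-in for HEAP_ALLOCED_GC(q) */
    uint32_t static_field;    /* stand-in for the closure's STATIC_FIELD */
} Closure;

static uint32_t static_flag = STATIC_FLAG_A;

/* Proposed: a static closure counts as alive only if it was marked
 * during the current major GC, instead of unconditionally. */
static bool is_alive_static(const Closure *c)
{
    if (c->heap_allocated)
        return true;  /* heap closures: handled by the usual checks */
    return (c->static_field & static_flag) != 0;
}
```

Under this model, a WEAK whose key is an unmarked static object would no
longer be kept alive unconditionally.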

Am I missing anything?

Thanks,

Ömer


Re: Write barrier for stack updates?

2018-07-18 Thread Ömer Sinan Ağacan
Ah, this makes so much sense, thanks. I was looking at call sites of
recordMutable, recordMutableCap etc. and forgot about recordClosureMutated
which is apparently what dirty_STACK calls.

Thanks,

Ömer
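
For reference, the invariant discussed in this thread can be modeled with a
standalone sketch (illustrative only; dirty_stack and schedule_thread stand in
for dirty_STACK() and the scheduler, they are not the real RTS functions):

```c
#include <assert.h>
#include <stdbool.h>

/* The scheduler marks a stack dirty before running its thread, so while
 * the thread runs the stack is already on the mut_list and compiled
 * code/primops need no further write barrier. A GC may mark the stack
 * clean again; the next schedule re-dirties it. */

typedef struct { bool dirty; } Stack;

static int mut_list_entries = 0;

static void dirty_stack(Stack *s)       /* models dirty_STACK() */
{
    if (!s->dirty) {
        s->dirty = true;
        mut_list_entries++;             /* models recordClosureMutated() */
    }
}

static void schedule_thread(Stack *s)   /* scheduler: dirty before running */
{
    dirty_stack(s);
    /* ... mutator now pushes/updates frames with no extra barriers ... */
}

static void gc_mark_clean(Stack *s)     /* GC may mark the stack clean */
{
    s->dirty = false;
    mut_list_entries = 0;
}
```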

Simon Marlow wrote on Wed, 18 Jul 2018 at 10:52:
>
> Hi Ömer,
>
> The write barrier is the function `dirty_STACK()` here: 
> https://phabricator.haskell.org/diffusion/GHC/browse/master/rts%2Fsm%2FStorage.c$1133-1140
>
> If you grep for `dirty_STACK` you'll see it being called everywhere we mutate 
> a STACK, in particular in the scheduler just before running a thread: 
> https://phabricator.haskell.org/diffusion/GHC/browse/master/rts%2FSchedule.c$412
>
> We don't call the write barrier in the code generator or from primops, 
> because at that point the thread is already running and has already been 
> marked dirty. If we GC and mark the stack clean, then it will be marked dirty 
> again by the scheduler before we start running it.
>
> Cheers
> Simon
>
> On 17 July 2018 at 20:45, Ömer Sinan Ağacan  wrote:
>>
>> Hi Simon,
>>
>> I'm a bit confused about stack updates in generated code and write barriers.
>> Because stacks are mutable (we push new stuff or maybe even update existing
>> frames?), it seems to me that we need one of these two, similar to other mutable
>> objects:
>>
>> - Always keep all stacks in mut_lists
>> - Add write barriers before updates
>>
>> However looking at some of the primops like catch# and the code generator 
>> that
>> generates code that pushes update frames I can't see any write barriers and 
>> the
>> GC doesn't always add stacks to mut_lists (unlike e.g. MUT_ARR_PTRS). I also
>> thought maybe we add a stack to a mut_list when we switch to the TSO that 
>> owns
>> it or we park the TSO, but I don't see anything relevant in Schedule.c or
>> ThreadPaused.c. So I'm lost. Could you say a few words about how we deal with
>> mutated stacks in the GC, so that if an old stack points to a young object we
>> don't collect the young object in a minor GC?
>>
>> Thanks,
>>
>> Ömer
>
>


Write barrier for stack updates?

2018-07-17 Thread Ömer Sinan Ağacan
Hi Simon,

I'm a bit confused about stack updates in generated code and write barriers.
Because stacks are mutable (we push new stuff or maybe even update existing
frames?), it seems to me that we need one of these two, similar to other mutable
objects:

- Always keep all stacks in mut_lists
- Add write barriers before updates

However looking at some of the primops like catch# and the code generator that
generates code that pushes update frames I can't see any write barriers and the
GC doesn't always add stacks to mut_lists (unlike e.g. MUT_ARR_PTRS). I also
thought maybe we add a stack to a mut_list when we switch to the TSO that owns
it or we park the TSO, but I don't see anything relevant in Schedule.c or
ThreadPaused.c. So I'm lost. Could you say a few words about how we deal with
mutated stacks in the GC, so that if an old stack points to a young object we
don't collect the young object in a minor GC?

Thanks,

Ömer


Not possible to build debug + prof, oversight or expected?

2018-07-03 Thread Ömer Sinan Ağacan
I just realized GHC 8.4.2 doesn't ship a debug + prof RTS, and HEAD doesn't
build one with the release build (no build.mk) or with the prof flavor. It seems
to me that this is a bug, because some debug files in the RTS (e.g. Printer.c)
actually check the PROFILING macro in a few places and print different things
depending on it.

Does anyone know if this is intentional? It's really inconvenient that I can't
use the debug runtime to debug profiling build issues.

(I also realized that we don't generate .so files for some ways, which means
-dynamic also doesn't combine with e.g. -prof.)

Thanks

Ömer


Re: How do I add ghc-prim as a dep for ghc?

2018-06-27 Thread Ömer Sinan Ağacan
It turns out there are two GHC packages: ghc and ghc-bin. I needed to add the
dependency to ghc-bin, but I wasn't aware of that and added it to ghc instead.

Ömer

Ömer Sinan Ağacan wrote on Tue, 26 Jun 2018 at 21:58:
>
> I did make distclean; ./boot; ./configure ... no luck. Checked ghc.cabal also.
>
> Ömer
>
>
> Ben Gamari , 26 Haz 2018 Sal, 21:39 tarihinde şunu yazdı:
> >
> > Ömer Sinan Ağacan  writes:
> >
> > > I'm trying to add ghc-prim as a dependency to the ghc package. So far 
> > > I've done
> > > these changes:
> > >
> > snip
> >
> > > Any ideas what else to edit?
> > >
> > Did you rerun ./configure after modifying ghc.cabal.in? I would
> > double-check that ghc.cabal contains the dependency.
> >
> > Cheers,
> >
> > - Ben


Re: How do I add ghc-prim as a dep for ghc?

2018-06-26 Thread Ömer Sinan Ağacan
I did make distclean; ./boot; ./configure ... no luck. Checked ghc.cabal also.

Ömer


Ben Gamari wrote on Tue, 26 Jun 2018 at 21:39:
>
> Ömer Sinan Ağacan  writes:
>
> > I'm trying to add ghc-prim as a dependency to the ghc package. So far I've 
> > done
> > these changes:
> >
> snip
>
> > Any ideas what else to edit?
> >
> Did you rerun ./configure after modifying ghc.cabal.in? I would
> double-check that ghc.cabal contains the dependency.
>
> Cheers,
>
> - Ben


Re: Booting ghc with system-wide installed Cabal?

2018-06-26 Thread Ömer Sinan Ağacan
We don't have to break anyone's workflow. We could introduce an ENV var or a
flag for this.

> Also, how would one return to a pristine state if it was done that way?

By unsetting the ENV var or not using the flag.

Ömer
Niklas Larsson wrote on Tue, 26 Jun 2018 at 10:26:
>
> Installing stuff system-wide without doing ‘make install’ would break my 
> expectations for how the build works. Also, how would one return to a 
> pristine state if it was done that way?
>
> // Niklas
>
> > 26 juni 2018 kl. 08:57 skrev Ömer Sinan Ağacan :
> >
> > Currently we have to build Cabal from scratch after every make clean. 
> > Ideally I
> > should be able to skip this step by installing the correct versions of Cabal
> > > and cabal-install system-wide, but as far as I can see we currently don't
> > support this. Any ideas on how to make this work?
> >
> > Thanks,
> >
> > Ömer


Booting ghc with system-wide installed Cabal?

2018-06-26 Thread Ömer Sinan Ağacan
Currently we have to build Cabal from scratch after every make clean. Ideally I
should be able to skip this step by installing the correct versions of Cabal
and cabal-install system-wide, but as far as I can see we currently don't
support this. Any ideas on how to make this work?

Thanks,

Ömer


How do I add ghc-prim as a dep for ghc?

2018-06-26 Thread Ömer Sinan Ağacan
I'm trying to add ghc-prim as a dependency to the ghc package. So far I've done
these changes:

diff --git a/compiler/ghc.cabal.in b/compiler/ghc.cabal.in
index 01628dcad1..b9c3b3d02b 100644
--- a/compiler/ghc.cabal.in
+++ b/compiler/ghc.cabal.in
@@ -65,7 +65,8 @@ Library
ghc-boot   == @ProjectVersionMunged@,
ghc-boot-th == @ProjectVersionMunged@,
ghc-heap   == @ProjectVersionMunged@,
-   ghci == @ProjectVersionMunged@
+   ghci == @ProjectVersionMunged@,
+   ghc-prim

 if os(windows)
 Build-Depends: Win32  >= 2.3 && < 2.7
diff --git a/ghc.mk b/ghc.mk
index c0b99c00f4..26c6e86c02 100644
--- a/ghc.mk
+++ b/ghc.mk
@@ -420,7 +420,8 @@ else # CLEANING
 # programs such as GHC and ghc-pkg, that we do not assume the stage0
 # compiler already has installed (or up-to-date enough).

-PACKAGES_STAGE0 = binary text transformers mtl parsec Cabal/Cabal hpc ghc-boot-th ghc-boot template-haskell ghc-heap ghci
+PACKAGES_STAGE0 = binary text transformers mtl parsec Cabal/Cabal hpc \
+                  ghc-boot-th ghc-boot template-haskell ghc-heap ghci ghc-prim
 ifeq "$(Windows_Host)" "NO"
 PACKAGES_STAGE0 += terminfo
 endif

But I'm getting this error:

ghc-cabal: Encountered missing dependencies:
ghc-prim ==0.5.3

Any ideas what else to edit?

Thanks,

Ömer


Re: Scavenging SRTs in scavenge_one

2018-06-22 Thread Ömer Sinan Ağacan
OK, finally everything makes sense I think. I was very confused by the code and
previous emails where you said:

> Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by
> the RTS, and none of these have SRTs.

I was pointing out that this is not entirely correct; we allocate large stacks.
But as you say scavenge_one() handles that case by scavenging stack SRTs.

So in summary:

- scavenge_one() is called to scavenge mut_lists and large objects.
- When scavenging mut_lists there is no need to scavenge SRTs (see previous
  emails).
- When scavenging large objects we know that certain closure types can't be
  large (e.g. FUN, THUNK) while others can (e.g. STACK), so scavenge_one()
  scavenges stack SRTs but does not scavenge FUN and THUNK SRTs.
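
The summary above could be sketched as follows (illustrative only, not the
actual RTS code; the counter just makes the dispatch observable):

```c
#include <assert.h>

/* Standalone model of scavenge_one()'s SRT behaviour: a STACK is walked
 * frame by frame, SRTs included, while FUN/THUNK SRTs are skipped here
 * because those closure types are never large and mut_list entries do
 * not need their SRTs traversed. */

typedef enum { FUN, THUNK, STACK, MUT_ARR_PTRS } ClosureType;

static int srts_scavenged = 0;

static void scavenge_stack(void)
{
    /* walks stack frames; each frame's SRT is traversed */
    srts_scavenged++;
}

static void scavenge_one(ClosureType t)
{
    switch (t) {
    case STACK:
        scavenge_stack();       /* stack SRTs are traversed */
        break;
    case FUN:
    case THUNK:
        break;                  /* SRTs deliberately skipped here */
    default:
        break;                  /* primitive objects have no SRTs */
    }
}
```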

Ömer

Simon Marlow wrote on Thu, 21 Jun 2018 at 21:27:
>
> When scavenge_one() sees a STACK, it calls scavenge_stack() which traverses 
> the stack frames, including their SRTs.
>
> So I don't understand what's going wrong for you - how are the SRTs not being 
> traversed?
>
> Cheers
> Simon
>
> On 21 June 2018 at 11:58, Ömer Sinan Ağacan  wrote:
>>
>> Here's an example where we allocate a large (4K) stack:
>>
>> >>> bt
>> #0  allocateMightFail (cap=0x7f366808cfc0 ,
>> n=4096) at rts/sm/Storage.c:876
>> #1  0x7f3667e4a85d in allocate (cap=0x7f366808cfc0
>> , n=4096) at rts/sm/Storage.c:849
>> #2  0x7f3667e16f46 in threadStackOverflow (cap=0x7f366808cfc0
>> , tso=0x4200152a68) at rts/Threads.c:600
>> #3  0x7f3667e12a64 in schedule
>> (initialCapability=0x7f366808cfc0 , task=0x78c970) at
>> rts/Schedule.c:520
>> #4  0x7f3667e1215f in scheduleWaitThread (tso=0x4200105388,
>> ret=0x0, pcap=0x7ffef40dce78) at rts/Schedule.c:2533
>> #5  0x7f3667e25685 in rts_evalLazyIO (cap=0x7ffef40dce78,
>> p=0x736ef8, ret=0x0) at rts/RtsAPI.c:530
>> #6  0x7f3667e25f7a in hs_main (argc=16, argv=0x7ffef40dd0a8,
>> main_closure=0x736ef8, rts_config=...) at rts/RtsMain.c:72
>> #7  0x004f738f in main ()
>>
>> This is based on an old tree so source locations may not be correct, it's 
>> this
>> code in threadStackOverflow():
>>
>> // Charge the current thread for allocating stack.  Stack usage is
>> // non-deterministic, because the chunk boundaries might vary from
>> // run to run, but accounting for this is better than not
>> // accounting for it, since a deep recursion will otherwise not be
>> // subject to allocation limits.
>> cap->r.rCurrentTSO = tso;
>> new_stack = (StgStack*) allocate(cap, chunk_size);
>> cap->r.rCurrentTSO = NULL;
>>
>> SET_HDR(new_stack, &stack_STACK_info, old_stack->header.prof.ccs);
>> TICK_ALLOC_STACK(chunk_size);
>>
>> Ömer
>> Ömer Sinan Ağacan , 21 Haz 2018 Per, 13:42
>> tarihinde şunu yazdı:
>> >
>> > > Large objects can only be primitive objects, like MUT_ARR_PTRS, 
>> > > allocated by
>> > > the RTS, and none of these have SRTs.
>> >
>> > Is it not possible to allocate a large STACK? I'm currently observing this 
>> > in
>> > gdb:
>> >
>> > >>> call *Bdescr(0x4200ec9000)
>> > $2 = {
>> >   start = 0x4200ec9000,
>> >   free = 0x4200ed1000,
>> >   link = 0x4200100e80,
>> >   u = {
>> > back = 0x4200103980,
>> > bitmap = 0x4200103980,
>> > scan = 0x4200103980
>> >   },
>> >   gen = 0x77b4b8,
>> >   gen_no = 1,
>> >   dest_no = 1,
>> >   node = 0,
>> >   flags = 1027, <-- BF_LARGE | BF_EVACUATED | ...
>> >   blocks = 8,
>> >   _padding = {[0] = 0, [1] = 0, [2] = 0}
>> > }
>> >
>> > >>> call printClosure(0x4200ec9000)
>> > 0x4200ec9000: STACK
>> >
>> > >>> call checkClosure(0x4200ec9000)
>> > $3 = 4096 -- makes sense, larger than 3277 bytes
>> >
>> > So I have a large STACK object, and STACKs can refer to static objects. But
>> > when we scavenge this object we don't scavenge its SRTs because we use
>> > scavenge_one(). This seems wrong to me.
>> >
>> > Ömer
>> >
>> > Simon Marlow , 20 Haz 2018 Çar, 14:32 tarihinde şunu 
>> > yazdı:
>> > >
>> > > Interesting point. I don't think there are any large objects with SRTs, 
>> > > but we should document the invariant because we're relying on it.
>> > >
>

Re: Scavenging SRTs in scavenge_one

2018-06-21 Thread Ömer Sinan Ağacan
Here's an example where we allocate a large (4K) stack:

>>> bt
#0  allocateMightFail (cap=0x7f366808cfc0 ,
n=4096) at rts/sm/Storage.c:876
#1  0x7f3667e4a85d in allocate (cap=0x7f366808cfc0
, n=4096) at rts/sm/Storage.c:849
#2  0x7f3667e16f46 in threadStackOverflow (cap=0x7f366808cfc0
, tso=0x4200152a68) at rts/Threads.c:600
#3  0x7f3667e12a64 in schedule
(initialCapability=0x7f366808cfc0 , task=0x78c970) at
rts/Schedule.c:520
#4  0x7f3667e1215f in scheduleWaitThread (tso=0x4200105388,
ret=0x0, pcap=0x7ffef40dce78) at rts/Schedule.c:2533
#5  0x7f3667e25685 in rts_evalLazyIO (cap=0x7ffef40dce78,
p=0x736ef8, ret=0x0) at rts/RtsAPI.c:530
#6  0x7f3667e25f7a in hs_main (argc=16, argv=0x7ffef40dd0a8,
main_closure=0x736ef8, rts_config=...) at rts/RtsMain.c:72
#7  0x004f738f in main ()

This is based on an old tree so source locations may not be correct, it's this
code in threadStackOverflow():

// Charge the current thread for allocating stack.  Stack usage is
// non-deterministic, because the chunk boundaries might vary from
// run to run, but accounting for this is better than not
// accounting for it, since a deep recursion will otherwise not be
// subject to allocation limits.
cap->r.rCurrentTSO = tso;
new_stack = (StgStack*) allocate(cap, chunk_size);
cap->r.rCurrentTSO = NULL;

SET_HDR(new_stack, &stack_STACK_info, old_stack->header.prof.ccs);
TICK_ALLOC_STACK(chunk_size);

Ömer
Ömer Sinan Ağacan wrote on Thu, 21 Jun 2018 at 13:42:
>
> > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by
> > the RTS, and none of these have SRTs.
>
> Is it not possible to allocate a large STACK? I'm currently observing this in
> gdb:
>
> >>> call *Bdescr(0x4200ec9000)
> $2 = {
>   start = 0x4200ec9000,
>   free = 0x4200ed1000,
>   link = 0x4200100e80,
>   u = {
> back = 0x4200103980,
> bitmap = 0x4200103980,
> scan = 0x4200103980
>   },
>   gen = 0x77b4b8,
>   gen_no = 1,
>   dest_no = 1,
>   node = 0,
>   flags = 1027, <-- BF_LARGE | BF_EVACUATED | ...
>   blocks = 8,
>   _padding = {[0] = 0, [1] = 0, [2] = 0}
> }
>
> >>> call printClosure(0x4200ec9000)
> 0x4200ec9000: STACK
>
> >>> call checkClosure(0x4200ec9000)
> $3 = 4096 -- makes sense, larger than 3277 bytes
>
> So I have a large STACK object, and STACKs can refer to static objects. But
> when we scavenge this object we don't scavenge its SRTs because we use
> scavenge_one(). This seems wrong to me.
>
> Ömer
>
> Simon Marlow , 20 Haz 2018 Çar, 14:32 tarihinde şunu 
> yazdı:
> >
> > Interesting point. I don't think there are any large objects with SRTs, but 
> > we should document the invariant because we're relying on it.
> >
> > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated 
> > by the RTS, and none of these have SRTs.
> >
> > We did have plans to allocate memory for large dynamic objects using 
> > `allocate()` from compiled code, in which case we could have large objects 
> > that could be THUNK, FUN, etc. and could have an SRT, in which case we 
> > would need to revisit this.  You might want to take a look at Note [big 
> > objects] in GCUtils.c, which is relevant here.
> >
> > Cheers
> > Simon
> >
> >
> > On 20 June 2018 at 09:20, Ömer Sinan Ağacan  wrote:
> >>
> >> Hi Simon,
> >>
> >> I'm confused about this code again. You said
> >>
> >> > scavenge_one() is only used for a non-major collection, where we aren't
> >> > traversing SRTs.
> >>
> >> But I think this is not true; scavenge_one() is also used to scavenge large
> >> objects (in scavenge_large()), which are scavenged even in major GCs. So it
> >> seems like we never really scavenge SRTs of large objects. This doesn't 
> >> look
> >> right to me. Am I missing anything? Can large objects not refer to static
> >> objects?
> >>
> >> Thanks
> >>
> >> Ömer
> >>
> >> Ömer Sinan Ağacan , 2 May 2018 Çar, 09:03
> >> tarihinde şunu yazdı:
> >> >
> >> > Thanks Simon, this is really helpful.
> >> >
> >> > > If you look at scavenge_fun_srt() and co, you'll see that they return
> >> > > immediately if !major_gc.
> >> >
> >> > Thanks for pointing this out -- I didn't realize it's returning early 
> >> > when
> >> > !major_gc and this caused a lot of confusion. Now everything makes sense.

Re: Scavenging SRTs in scavenge_one

2018-06-21 Thread Ömer Sinan Ağacan
> Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by
> the RTS, and none of these have SRTs.

Is it not possible to allocate a large STACK? I'm currently observing this in
gdb:

>>> call *Bdescr(0x4200ec9000)
$2 = {
  start = 0x4200ec9000,
  free = 0x4200ed1000,
  link = 0x4200100e80,
  u = {
back = 0x4200103980,
bitmap = 0x4200103980,
scan = 0x4200103980
  },
  gen = 0x77b4b8,
  gen_no = 1,
  dest_no = 1,
  node = 0,
  flags = 1027, <-- BF_LARGE | BF_EVACUATED | ...
  blocks = 8,
  _padding = {[0] = 0, [1] = 0, [2] = 0}
}

>>> call printClosure(0x4200ec9000)
0x4200ec9000: STACK

>>> call checkClosure(0x4200ec9000)
$3 = 4096 -- makes sense, larger than 3277 bytes

So I have a large STACK object, and STACKs can refer to static objects. But
when we scavenge this object we don't scavenge its SRTs because we use
scavenge_one(). This seems wrong to me.

Ömer

Simon Marlow , 20 Haz 2018 Çar, 14:32 tarihinde şunu yazdı:
>
> Interesting point. I don't think there are any large objects with SRTs, but 
> we should document the invariant because we're relying on it.
>
> Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by 
> the RTS, and none of these have SRTs.
>
> We did have plans to allocate memory for large dynamic objects using 
> `allocate()` from compiled code, in which case we could have large objects 
> that could be THUNK, FUN, etc. and could have an SRT, in which case we would 
> need to revisit this.  You might want to take a look at Note [big objects] in 
> GCUtils.c, which is relevant here.
>
> Cheers
> Simon
>
>
> On 20 June 2018 at 09:20, Ömer Sinan Ağacan  wrote:
>>
>> Hi Simon,
>>
>> I'm confused about this code again. You said
>>
>> > scavenge_one() is only used for a non-major collection, where we aren't
>> > traversing SRTs.
>>
>> But I think this is not true; scavenge_one() is also used to scavenge large
>> objects (in scavenge_large()), which are scavenged even in major GCs. So it
>> seems like we never really scavenge SRTs of large objects. This doesn't look
>> right to me. Am I missing anything? Can large objects not refer to static
>> objects?
>>
>> Thanks
>>
>> Ömer
>>
>> Ömer Sinan Ağacan , 2 May 2018 Çar, 09:03
>> tarihinde şunu yazdı:
>> >
>> > Thanks Simon, this is really helpful.
>> >
>> > > If you look at scavenge_fun_srt() and co, you'll see that they return
>> > > immediately if !major_gc.
>> >
>> > Thanks for pointing this out -- I didn't realize it's returning early when
>> > !major_gc and this caused a lot of confusion. Now everything makes sense.
>> >
>> > I'll add a note for scavenging SRTs and refer to it in relevant code and 
>> > submit
>> > a diff.
>> >
>> > Ömer
>> >
>> > 2018-05-01 22:10 GMT+03:00 Simon Marlow :
>> > > Your explanation is basically right. scavenge_one() is only used for a
>> > > non-major collection, where we aren't traversing SRTs. Admittedly this 
>> > > is a
>> > > subtle point that could almost certainly be documented better, I probably
>> > > just overlooked it.
>> > >
>> > > More inline:
>> > >
>> > > On 1 May 2018 at 10:26, Ömer Sinan Ağacan  wrote:
>> > >>
>> > >> I have an idea but it doesn't explain everything;
>> > >>
>> > >> SRTs are used to collect CAFs, and CAFs are always added to the oldest
>> > >> generation's mut_list when allocated [1].
>> > >>
>> > >> When we're scavenging a mut_list we know we're not doing a major GC, and
>> > >> because mut_list of oldest generation has all the newly allocated CAFs,
>> > >> which
>> > >> will be scavenged anyway, no need to scavenge SRTs for those.
>> > >>
>> > >> Also, static objects are always evacuated to the oldest gen [2], so any
>> > >> CAFs
>> > >> that are alive but not in the mut_list of the oldest gen will stay alive
>> > >> after
>> > >> a non-major GC, again no need to scavenge SRTs to keep these alive.
>> > >>
>> > >> This also explains why it's OK to not collect static objects (and not
>> > >> treat
>> > >> them as roots) in non-major GCs.
>> > >>
>> > >> However this doesn't explain
>> > >>
>> >> - Why it's OK to scavenge large objects with scavenge_one().

Re: DEBUG-on

2018-06-18 Thread Ömer Sinan Ağacan
If we're going to test with a DEBUG-enabled compiler we may also want to enable
sanity checks. I've recently been using them a lot and they catch many bugs
that would otherwise go unnoticed. I recently filed #15241
for some of the tests that currently fail the sanity checks.

Ömer

Simon Peyton Jones via ghc-devs , 18 Haz 2018
Pzt, 11:35 tarihinde şunu yazdı:
>
> Ben
>
> We don’t really test with a DEBUG-enabled compiler.  And yet, those 
> assertions are all there for a reason.
>
> In our CI infrastructure, I wonder if we might do a regression-test run on at 
> least one architecture with DEBUG on?
>
> e.g. https://ghc.haskell.org/trac/ghc/ticket/14904
>
> Simon
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


req_interp tests

2018-06-18 Thread Ömer Sinan Ağacan
Hi,

I have a few problems with req_interp tests.

First, req_interp doesn't actually skip the test: it runs it but expects it to
fail. This causes problems when testing the stage 1 compiler, because 629
req_interp tests are run for no reason. Ideally req_interp would skip the test,
and for the error messages ("not built for interactive use" etc.) we'd have a
few stage 1 tests (maybe we already have this). This would make the testsuite
much faster for testing stage 1.

Second, the combination of req_interp and compile_fail currently doesn't work,
because req_interp makes a failing test pass, but compile_fail expects the test
to fail. See T3953 as an example. Making req_interp skip the test fixes this
problem as well.

So I'd like to make req_interp skip the test instead of expecting it to fail.
Any objections to this?

Ömer


Use NULL instead of END_X_QUEUE closures?

2018-05-07 Thread Ömer Sinan Ağacan
Currently we sometimes use special closures to mark the end of lists of
different objects. Some examples:

- END_TSO_QUEUE
- END_STM_WATCH_QUEUE
- END_STM_CHUNK_LIST

But we also use NULL for the same thing, e.g. in weak pointer lists
(old_weak_ptr_list, weak_ptr_list).

I'm wondering why we need special marker objects (which are actual closures
with info tables) instead of using NULL consistently. The current approach causes a
minor problem when working on the RTS because every time I traverse a list I
need to remember how the list is terminated (e.g. NULL when traversing weak
pointer lists, END_TSO_QUEUE when traversing TSO lists).

Ömer


Re: Scavenging SRTs in scavenge_one

2018-05-02 Thread Ömer Sinan Ağacan
Thanks Simon, this is really helpful.

> If you look at scavenge_fun_srt() and co, you'll see that they return
> immediately if !major_gc.

Thanks for pointing this out -- I didn't realize it's returning early when
!major_gc and this caused a lot of confusion. Now everything makes sense.

I'll add a note for scavenging SRTs and refer to it in relevant code and submit
a diff.

Ömer

2018-05-01 22:10 GMT+03:00 Simon Marlow <marlo...@gmail.com>:
> Your explanation is basically right. scavenge_one() is only used for a
> non-major collection, where we aren't traversing SRTs. Admittedly this is a
> subtle point that could almost certainly be documented better, I probably
> just overlooked it.
>
> More inline:
>
> On 1 May 2018 at 10:26, Ömer Sinan Ağacan <omeraga...@gmail.com> wrote:
>>
>> I have an idea but it doesn't explain everything;
>>
>> SRTs are used to collect CAFs, and CAFs are always added to the oldest
>> generation's mut_list when allocated [1].
>>
>> When we're scavenging a mut_list we know we're not doing a major GC, and
>> because mut_list of oldest generation has all the newly allocated CAFs,
>> which
>> will be scavenged anyway, no need to scavenge SRTs for those.
>>
>> Also, static objects are always evacuated to the oldest gen [2], so any
>> CAFs
>> that are alive but not in the mut_list of the oldest gen will stay alive
>> after
>> a non-major GC, again no need to scavenge SRTs to keep these alive.
>>
>> This also explains why it's OK to not collect static objects (and not
>> treat
>> them as roots) in non-major GCs.
>>
>> However this doesn't explain
>>
>> - Why it's OK to scavenge large objects with scavenge_one().
>
>
> I don't understand - perhaps you could elaborate on why you think it might
> not be OK? Large objects are treated exactly the same as small objects with
> respect to their lifetimes.
>
>>
>> - Why we scavenge SRTs in non-major collections in other places (e.g.
>>   scavenge_block()).
>
>
> If you look at scavenge_fun_srt() and co, you'll see that they return
> immediately if !major_gc.
>
>>
>> Simon, could you say a few words about this?
>
>
> Was that enough words? I have more if necessary :)
>
> Cheers
> Simon
>
>
>>
>>
>> [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449
>> [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763
>>
>> Ömer
>>
>> 2018-03-28 17:49 GMT+03:00 Ben Gamari <b...@well-typed.com>:
>> > Hi Simon,
>> >
>> > I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It
>> > appears that it is primarily used for remembered set entries but it's
>> > not at all clear why this means that we can safely ignore SRTs (e.g. in
>> > the FUN and THUNK cases).
>> >
>> > Can you shed some light on this?
>> >
>> > Cheers,
>> >
>> > - Ben
>> >
>> > ___
>> > ghc-devs mailing list
>> > ghc-devs@haskell.org
>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>> >
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>


Re: Scavenging SRTs in scavenge_one

2018-05-01 Thread Ömer Sinan Ağacan
I have an idea but it doesn't explain everything;

SRTs are used to collect CAFs, and CAFs are always added to the oldest
generation's mut_list when allocated [1].

When we're scavenging a mut_list we know we're not doing a major GC, and
because mut_list of oldest generation has all the newly allocated CAFs, which
will be scavenged anyway, no need to scavenge SRTs for those.

Also, static objects are always evacuated to the oldest gen [2], so any CAFs
that are alive but not in the mut_list of the oldest gen will stay alive after
a non-major GC, again no need to scavenge SRTs to keep these alive.

This also explains why it's OK to not collect static objects (and not treat
them as roots) in non-major GCs.

However this doesn't explain

- Why it's OK to scavenge large objects with scavenge_one().

- Why we scavenge SRTs in non-major collections in other places (e.g.
  scavenge_block()).

Simon, could you say a few words about this?

[1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449
[2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763

Ömer

2018-03-28 17:49 GMT+03:00 Ben Gamari :
> Hi Simon,
>
> I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It
> appears that it is primarily used for remembered set entries but it's
> not at all clear why this means that we can safely ignore SRTs (e.g. in
> the FUN and THUNK cases).
>
> Can you shed some light on this?
>
> Cheers,
>
> - Ben
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


Re: ZuriHac 2018 GHC DevOps track - Request for Contributions

2018-04-08 Thread Ömer Sinan Ağacan
Hi,

I'd also be happy to help. At the very least I can be around as a mentor, but
if I can find a suitable task I may also host a hacking session.

Ömer

2018-04-08 16:01 GMT+03:00 Michal Terepeta :
> On Sat, Apr 7, 2018 at 3:34 PM Niklas Hambüchen  wrote:
>>
>> Hi GHC devs,
>>
>> The ZuriHac 2018 conference will feature a GHC DevOps track (which
>> Andreas and I are coordinating), that will be all about fostering
>> contributions to GHC and learning to hack it. There will be a room or
>> two allocated at Zurihac for this purpose.
>> [...]
>> Please contact Andreas or me (on this list or privately) if you think
>> you could help in any of these directions!
>> If you're not sure, contact us anyway and tell us your idea!
>>
>> Best,
>> Niklas and Andreas
>> ZuriHac 2018 GHC DevOps track coordinators
>
>
> Hi Niklas, Andreas,
>
> I'd be happy to help. :) I know a bit about the backend (e.g., cmm level),
> but it might be tricky to find there some smaller/self-contained projects
> that would fit ZuriHac.
> You've mentioned performance regression tests - maybe we could also work on
> improving nofib?
>
> - Michal
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


New slow validate errors

2018-04-08 Thread Ömer Sinan Ağacan
Hi,

I see a lot of these errors in slow validate using current GHC HEAD:

ghc: panic! (the 'impossible' happened)
  (GHC version 8.5.20180407 for x86_64-unknown-linux):
Each block should be reachable from only one ProcPoint

This wasn't happening ~10 days ago. I suspect it may be D4417 but I haven't
checked.

Ömer


Re: Why does the RTS run GC right before shutting down?

2018-04-06 Thread Ömer Sinan Ağacan
We already run all finalizers after exitScheduler:

https://github.com/ghc/ghc/blob/master/rts/RtsStartup.c#L382-L388

so no need to run GC for that.

Ömer

2018-04-06 17:06 GMT+03:00 Edward Z. Yang :
> I believe it's so that we can run finalizers before shutdown.
>
> Excerpts from Ömer Sinan Ağacan's message of 2018-04-06 16:49:41 +0300:
>> Hi,
>>
>> I'm wondering why we run GC in this line:
>>
>> https://github.com/ghc/ghc/blob/master/rts/Schedule.c#L2670
>>
>> I went back in commit history using git blame and found the commit that
>> introduced that line (5638488ba28), but it didn't help. Does anyone know why 
>> we
>> need that line?
>>
>> Thanks,
>>
>> Ömer


Why does the RTS run GC right before shutting down?

2018-04-06 Thread Ömer Sinan Ağacan
Hi,

I'm wondering why we run GC in this line:

https://github.com/ghc/ghc/blob/master/rts/Schedule.c#L2670

I went back in commit history using git blame and found the commit that
introduced that line (5638488ba28), but it didn't help. Does anyone know why we
need that line?

Thanks,

Ömer


Re: 8.5 build failure

2018-04-03 Thread Ömer Sinan Ağacan
Does the error go away if you restart the build without cleaning? I had the
same error on my nightly builder, but it worked when I restarted the build.

Ömer

2018-04-03 18:06 GMT+03:00 John Leo :
> Hi everyone,
>
> I pulled from head this morning and rebased my current work on it, and am
> getting a build error I've never seen before. I don't think it's due to any
> of my own changes, and everything built fine last time I tried just a couple
> days ago . I'd pulled both code and submodules. I did make clean, ./boot,
> ./configure, and then make. The last few lines of the output are below. This
> is on a Mac using GHC 8.2.2. Let me know if you need any more info.
>
> John
>
> "inplace/bin/ghc-stage1" -this-unit-id rts -shared -dynamic -dynload deploy
> -no-auto-link-packages -Lrts/dist/build -lffi -optl-Wl,-rpath
> -optl-Wl,@loader_path `cat rts/dist/libs.depend`
> rts/dist/build/Adjustor.thr_debug_dyn_o rts/dist/build/Arena.thr_debug_dyn_o
> rts/dist/build/Capability.thr_debug_dyn_o
> rts/dist/build/CheckUnload.thr_debug_dyn_o
> rts/dist/build/ClosureFlags.thr_debug_dyn_o
> rts/dist/build/Disassembler.thr_debug_dyn_o
> rts/dist/build/FileLock.thr_debug_dyn_o
> rts/dist/build/Globals.thr_debug_dyn_o rts/dist/build/Hash.thr_debug_dyn_o
> rts/dist/build/Hpc.thr_debug_dyn_o rts/dist/build/HsFFI.thr_debug_dyn_o
> rts/dist/build/Inlines.thr_debug_dyn_o
> rts/dist/build/Interpreter.thr_debug_dyn_o
> rts/dist/build/LdvProfile.thr_debug_dyn_o
> rts/dist/build/Libdw.thr_debug_dyn_o
> rts/dist/build/LibdwPool.thr_debug_dyn_o
> rts/dist/build/Linker.thr_debug_dyn_o
> rts/dist/build/Messages.thr_debug_dyn_o
> rts/dist/build/OldARMAtomic.thr_debug_dyn_o
> rts/dist/build/PathUtils.thr_debug_dyn_o rts/dist/build/Pool.thr_debug_dyn_o
> rts/dist/build/Printer.thr_debug_dyn_o
> rts/dist/build/ProfHeap.thr_debug_dyn_o
> rts/dist/build/ProfilerReport.thr_debug_dyn_o
> rts/dist/build/ProfilerReportJson.thr_debug_dyn_o
> rts/dist/build/Profiling.thr_debug_dyn_o
> rts/dist/build/Proftimer.thr_debug_dyn_o
> rts/dist/build/RaiseAsync.thr_debug_dyn_o
> rts/dist/build/RetainerProfile.thr_debug_dyn_o
> rts/dist/build/RetainerSet.thr_debug_dyn_o
> rts/dist/build/RtsAPI.thr_debug_dyn_o
> rts/dist/build/RtsDllMain.thr_debug_dyn_o
> rts/dist/build/RtsFlags.thr_debug_dyn_o
> rts/dist/build/RtsMain.thr_debug_dyn_o
> rts/dist/build/RtsMessages.thr_debug_dyn_o
> rts/dist/build/RtsStartup.thr_debug_dyn_o
> rts/dist/build/RtsSymbolInfo.thr_debug_dyn_o
> rts/dist/build/RtsSymbols.thr_debug_dyn_o
> rts/dist/build/RtsUtils.thr_debug_dyn_o rts/dist/build/STM.thr_debug_dyn_o
> rts/dist/build/Schedule.thr_debug_dyn_o
> rts/dist/build/Sparks.thr_debug_dyn_o rts/dist/build/Stable.thr_debug_dyn_o
> rts/dist/build/StaticPtrTable.thr_debug_dyn_o
> rts/dist/build/Stats.thr_debug_dyn_o rts/dist/build/StgCRun.thr_debug_dyn_o
> rts/dist/build/StgPrimFloat.thr_debug_dyn_o
> rts/dist/build/Task.thr_debug_dyn_o
> rts/dist/build/ThreadLabels.thr_debug_dyn_o
> rts/dist/build/ThreadPaused.thr_debug_dyn_o
> rts/dist/build/Threads.thr_debug_dyn_o rts/dist/build/Ticky.thr_debug_dyn_o
> rts/dist/build/Timer.thr_debug_dyn_o
> rts/dist/build/TopHandler.thr_debug_dyn_o
> rts/dist/build/Trace.thr_debug_dyn_o rts/dist/build/WSDeque.thr_debug_dyn_o
> rts/dist/build/Weak.thr_debug_dyn_o rts/dist/build/fs.thr_debug_dyn_o
> rts/dist/build/xxhash.thr_debug_dyn_o
> rts/dist/build/hooks/FlagDefaults.thr_debug_dyn_o
> rts/dist/build/hooks/LongGCSync.thr_debug_dyn_o
> rts/dist/build/hooks/MallocFail.thr_debug_dyn_o
> rts/dist/build/hooks/OnExit.thr_debug_dyn_o
> rts/dist/build/hooks/OutOfHeap.thr_debug_dyn_o
> rts/dist/build/hooks/StackOverflow.thr_debug_dyn_o
> rts/dist/build/sm/BlockAlloc.thr_debug_dyn_o
> rts/dist/build/sm/CNF.thr_debug_dyn_o
> rts/dist/build/sm/Compact.thr_debug_dyn_o
> rts/dist/build/sm/Evac.thr_debug_dyn_o
> rts/dist/build/sm/Evac_thr.thr_debug_dyn_o
> rts/dist/build/sm/GC.thr_debug_dyn_o rts/dist/build/sm/GCAux.thr_debug_dyn_o
> rts/dist/build/sm/GCUtils.thr_debug_dyn_o
> rts/dist/build/sm/MBlock.thr_debug_dyn_o
> rts/dist/build/sm/MarkWeak.thr_debug_dyn_o
> rts/dist/build/sm/Sanity.thr_debug_dyn_o
> rts/dist/build/sm/Scav.thr_debug_dyn_o
> rts/dist/build/sm/Scav_thr.thr_debug_dyn_o
> rts/dist/build/sm/Storage.thr_debug_dyn_o
> rts/dist/build/sm/Sweep.thr_debug_dyn_o
> rts/dist/build/eventlog/EventLog.thr_debug_dyn_o
> rts/dist/build/eventlog/EventLogWriter.thr_debug_dyn_o
> rts/dist/build/linker/CacheFlush.thr_debug_dyn_o
> rts/dist/build/linker/Elf.thr_debug_dyn_o
> rts/dist/build/linker/LoadArchive.thr_debug_dyn_o
> rts/dist/build/linker/M32Alloc.thr_debug_dyn_o
> rts/dist/build/linker/MachO.thr_debug_dyn_o
> rts/dist/build/linker/PEi386.thr_debug_dyn_o
> rts/dist/build/linker/SymbolExtras.thr_debug_dyn_o
> rts/dist/build/linker/elf_got.thr_debug_dyn_o
> rts/dist/build/linker/elf_plt.thr_debug_dyn_o
> rts/dist/build/linker/elf_plt_aarch64.thr_debug_dyn_o
> rts/dist/build/linker/elf_plt_arm.thr_debug_dyn_o
> 

Re: Phabricator new behavior regarding submitting patches for reviews

2018-03-30 Thread Ömer Sinan Ağacan
> I assume you worked this out? I think you can just "request review" in
> the actions menu at the bottom of the page.

This seems to work, although it's still one extra step compared to the
previous version.

Ömer

2018-03-30 21:04 GMT+03:00 Ben Gamari <b...@well-typed.com>:
> Ömer Sinan Ağacan <omeraga...@gmail.com> writes:
>
>> Thanks Ben. Is there anything I can do about the existing tickets stuck in
>> "draft" state?
>>
> I assume you worked this out? I think you can just "request review" in
> the actions menu at the bottom of the page.
>
> Cheers,
>
> - Ben
>


Re: Phabricator new behavior regarding submitting patches for reviews

2018-03-30 Thread Ömer Sinan Ağacan
Thanks Ben. Is there anything I can do about the existing tickets stuck in
"draft" state?

Ömer

2018-03-30 17:41 GMT+03:00 Ben Gamari <b...@well-typed.com>:
> Ömer Sinan Ağacan <omeraga...@gmail.com> writes:
>
>> Hi,
>>
>> One of the changes with the recent Phabricator update is that we can no 
>> longer
>> submit a patch for reviews until the build bot successfully builds it. I 
>> can't
>> even ping people in the comment section until the patch builds. It says:
>>
>>> These changes have not finished building yet and may have build failures. 
>>> This
>>> revision is currently a draft. You can leave comments, but no one will be
>>> notified until the revision is submitted for review.
>>
>> The "submit" button now says "submit quietly".
>>
>> This is really annoying because
>>
>> - It takes several days for build bot to build a patch (I have a patch that 
>> has
>>   been in the queue for 3 days now and it's still counting)
>>
>> - I can validate a patch on my laptop in an hour. (slow validate takes about 
>> 2-3
>>   hours) Previously I could get approvals, and then test locally and push. 
>> Now I
>>   can't do that unless I email people about the patch.
>>
>> - In the previous version I could submit an incomplete patch for comments, 
>> now I
>>   can't do that because no one will be notified and there's no way to ping
>>   people to explicitly draw attention (again unless I email people).
>>
>> So if possible (without downgrading it) could we bring back the old behavior
>> (perhaps there's a setting about this?)
>>
> Indeed, I am also concerned that the new behavior is going to slow down
> the review process too much. Unfortunately, there is currently no
> support to revert to the old notification behavior nor does Phacility
> seem to have any plan to add support [1].
>
> They do, however, mention a potential workaround which I have applied to
> our installation. I believe differentials should now behave as they did
> previously.
>
> Cheers,
>
> - Ben
>
>
> [1] https://secure.phabricator.com/T2543


Re: Question about indirectees of BLACKHOLE closures

2018-03-29 Thread Ömer Sinan Ağacan
I still don't understand the whole story with blackholes but I'll
update the comments around the BLACKHOLE stack frame and/or wiki pages
once I get a better understanding.

Ömer


2018-03-26 21:47 GMT+03:00 Ben Gamari :
> Simon Marlow  writes:
>
>> The raise closure is declared to be a THUNK:
>>
>> https://phabricator.haskell.org/diffusion/GHC/browse/master/rts/Exception.cmm;60e29dc2611f5c1a01cfd9a870841927847a7b74$424
>>
>> Another example of this is when an asynchronous exception is thrown, and we
>> update all the thunks/BLACKHOLEs pointed to by the update frames to point
>> to new thunks (actually AP_STACK closures) representing the frozen state of
>> evaluation of those thunks.  For this, see rts/RaiseAsync.c.
>>
> This thread has answered a number of interesting questions. It would be
> a shame if these answers vanished into the abyss of the ghc-devs
> archives.
>
> Omer, do you think you could make sure that the discussion here is
> summarized in a Note (or ensure that the relevant notes reference one
> another, if they already exist)?
>
> Cheers,
>
>  - Ben
>


Curious demand in a function parameter

2018-03-25 Thread Ömer Sinan Ağacan
Hi,

In this program

{-# LANGUAGE MagicHash #-}

module Lib where

import Control.Exception
import GHC.Exts
import GHC.IO

data Err = Err
  deriving (Show)
instance Exception Err

f :: Int -> Int -> IO Int
f x y | x > 0 = IO (raiseIO# (toException Err))
      | y > 0 = return 1
      | otherwise = return 2

when I compile this with 8.4 -O2 I get a strict demand on `y`:

f :: Int -> Int -> IO Int
[GblId,
 Arity=3,
 Str=,
 ...]

but clearly `y` is not used on all code paths, so I don't understand why we
have a strict demand here.

I found this example in the comments around `raiseIO#`:

-- raiseIO# needs to be a primop, because exceptions in the IO monad
-- must be *precise* - we don't want the strictness analyser turning
-- one kind of bottom into another, as it is allowed to do in pure code.
--
-- But we *do* want to know that it returns bottom after
-- being applied to two arguments, so that this function is strict in y
-- f x y | x>0   = raiseIO blah
--       | y>0   = return 1
--       | otherwise = return 2

However it doesn't explain why we want to be strict on `y`.

Interestingly, when I try to make GHC generate a worker and a wrapper for this
function to make the program fail by evaluating `y` eagerly I somehow got a
lazy demand on `y`:

{-# LANGUAGE MagicHash #-}

module Main where

import Control.Exception
import GHC.Exts
import GHC.IO

data Err = Err
  deriving (Show)
instance Exception Err

f :: Int -> Int -> IO Int
f x y | x > 0 = IO (raiseIO# (toException Err))
      | y > 0 = f x (y - 1)
      | otherwise = f (x - 1) y

main = f 1 undefined

I was thinking that this program should fail with "undefined" instead of "Err",
but the demand I got for `f` became:

f :: Int -> Int -> IO Int
[GblId,
 Arity=2,
 Str=,
 ...]

which makes sense to me. But I don't understand how my changes can change `y`'s
demand, and why the original demand is strict rather than lazy. Could anyone
give me some pointers?

Thanks

Ömer


Re: Question about indirectees of BLACKHOLE closures

2018-03-23 Thread Ömer Sinan Ağacan
Thanks Simon, that's really helpful.

A few more questions:

As far as I understand the difference between

- BLACKHOLE pointing to a TSO
- BLACKHOLE pointing to a BLOCKING_QUEUE

is that in the former we don't yet have any threads blocked by the BLACKHOLE
whereas in the latter we do, and the blocking queue holds all those blocked
threads. Did I get this right?

Secondly, can a BLACKHOLE point to a THUNK? I'd expect no, because we BLACKHOLE
a closure when we're done evaluating it (assuming no eager blackholing), and
evaluation usually happens up to WHNF.

Thanks,

Ömer

2018-03-20 18:27 GMT+03:00 Simon Marlow <marlo...@gmail.com>:
> Added comments: https://phabricator.haskell.org/D4517
>
> On 20 March 2018 at 14:58, Simon Marlow <marlo...@gmail.com> wrote:
>>
>> Hi Omer,
>>
>> On 20 March 2018 at 13:05, Ömer Sinan Ağacan <omeraga...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> I've been looking at BLACKHOLE closures and how the indirectee field is
>>> used
>>> and I have a few questions:
>>>
>>> Looking at evacuate for BLACKHOLE closures:
>>>
>>> case BLACKHOLE:
>>> {
>>> StgClosure *r;
>>> const StgInfoTable *i;
>>> r = ((StgInd*)q)->indirectee;
>>> if (GET_CLOSURE_TAG(r) == 0) {
>>> i = r->header.info;
>>> if (IS_FORWARDING_PTR(i)) {
>>> r = (StgClosure *)UN_FORWARDING_PTR(i);
>>> i = r->header.info;
>>> }
>>> if (i == &stg_TSO_info
>>> || i == &stg_WHITEHOLE_info
>>> || i == &stg_BLOCKING_QUEUE_CLEAN_info
>>> || i == &stg_BLOCKING_QUEUE_DIRTY_info) {
>>> copy(p,info,q,sizeofW(StgInd),gen_no);
>>> return;
>>> }
>>> ASSERT(i != &stg_IND_info);
>>> }
>>> q = r;
>>> *p = r;
>>> goto loop;
>>> }
>>>
>>> It seems like indirectee can be a TSO, WHITEHOLE, BLOCKING_QUEUE_CLEAN,
>>> BLOCKING_QUEUE_DIRTY, and it can't be IND. I'm wondering what does it
>>> mean for
>>> a BLACKHOLE to point to a
>>>
>>> - TSO
>>> - WHITEHOLE
>>> - BLOCKING_QUEUE_CLEAN
>>> - BLOCKING_QUEUE_DIRTY
>>
>>
>> That sounds right to me.
>>
>>>
>>> Is this documented somewhere or otherwise could someone give a few
>>> pointers on
>>> where to look in the code?
>>
>>
>> Unfortunately I don't think we have good documentation for this, but you
>> should look at the comments around messageBlackHole in Messages.c.
>>
>>>
>>> Secondly, I also looked at the BLACKHOLE entry code, and it seems like it
>>> has a
>>> different assumption about what can indirectee field point to:
>>>
>>> INFO_TABLE(stg_BLACKHOLE,1,0,BLACKHOLE,"BLACKHOLE","BLACKHOLE")
>>> (P_ node)
>>> {
>>> W_ r, info, owner, bd;
>>> P_ p, bq, msg;
>>>
>>> TICK_ENT_DYN_IND(); /* tick */
>>>
>>> retry:
>>> p = StgInd_indirectee(node);
>>> if (GETTAG(p) != 0) {
>>> return (p);
>>> }
>>>
>>> info = StgHeader_info(p);
>>> if (info == stg_IND_info) {
>>> // This could happen, if e.g. we got a BLOCKING_QUEUE that
>>> has
>>> // just been replaced with an IND by another thread in
>>> // wakeBlockingQueue().
>>> goto retry;
>>> }
>>>
>>> if (info == stg_TSO_info ||
>>> info == stg_BLOCKING_QUEUE_CLEAN_info ||
>>> info == stg_BLOCKING_QUEUE_DIRTY_info)
>>> {
>>> ("ptr" msg) = ccall allocate(MyCapability() "ptr",
>>>
>>> BYTES_TO_WDS(SIZEOF_MessageBlackHole));
>>>
>>> SET_HDR(msg, stg_MSG_BLACKHOLE_info, CCS_SYSTEM);
>>> MessageBlackHole_tso(msg) = CurrentTSO;
>>> MessageBlackHole_bh(msg) = node;
>>>
>>> (r) = ccall messageBlackHole(MyCapability() "ptr", msg
>>> "ptr");
>>>
>>> if (r == 0) {
>>> goto retry;
>>> } else {
>>> StgTSO_why_blocked(CurrentTSO) = Block

Re: What does "return" keyword mean in INFO_TABLE_RET declarations?

2018-03-20 Thread Ömer Sinan Ağacan
I think this may be my bad. Both StgMiscClosures.cmm and Updates.cmm have this
line in the header:

This file is written in a subset of C--, extended with various
features specific to GHC.  It is compiled by GHC directly.  For the
syntax of .cmm files, see the parser in ghc/compiler/cmm/CmmParse.y.

and CmmParse.y explains INFO_TABLE_RET:

Stack Frames


A stack frame is written like this:

INFO_TABLE_RET ( label, FRAME_TYPE, info_ptr, field1, ..., fieldN )
   return ( arg1, ..., argM )
{
  ... code ...
}

where field1 ... fieldN are the fields of the stack frame (with types)
arg1...argN are the values returned to the stack frame (with types).
The return values are assumed to be passed according to the
NativeReturn convention.

...

It's just that sometimes it's not easy to find your way in an 880kloc code base.

Sorry for the noise,

Ömer

2018-03-20 12:57 GMT+03:00 Simon Peyton Jones via ghc-devs
<ghc-devs@haskell.org>:
> It’s fine where it is, provided it takes the form of
>
> Note [Stack frames]
>
> and that Note is referred to from relevant places elsewhere.  E.g. Omer
> didn’t find it.   One plausible place to point to it is the very definition
> site of INFO_TABLE_RET, wherever that is.
>
>
>
> Simon
>
>
>
> From: ghc-devs <ghc-devs-boun...@haskell.org> On Behalf Of Simon Marlow
> Sent: 19 March 2018 18:50
> To: Rahul Muttineni <rahulm...@gmail.com>
> Cc: ghc-devs <ghc-devs@haskell.org>
> Subject: Re: What does "return" keyword mean in INFO_TABLE_RET declarations?
>
>
>
> On 19 March 2018 at 00:53, Rahul Muttineni <rahulm...@gmail.com> wrote:
>
> Hi Omer,
>
>
>
> An INFO_TABLE_RET is a frame that "can be returned to" and the return
> keyword allows you to provide a name for the value(s) that was(were)
> returned to this frame and do something with it if you wish. If you didn't
> have this keyword, you would have to do low-level stack manipulations
> yourself to get a handle on the return value and it's easy to mess up.
>
>
>
> You can think of INFO_TABLE_RET as a traditional stack frame in languages
> like C, except it's powerful because you can specify custom logic on how you
> deal with the returned value. In some cases, like stg_atomically_frame, you
> may not even return the value further down into the stack until certain
> conditions are met (the transaction is valid).
>
>
>
> This is correct.  The "documentation" for this is in the CmmParse.y module:
> https://phabricator.haskell.org/diffusion/GHC/browse/master/compiler/cmm/CmmParse.y;b3b394b44e42f19ab7c23668a4008e4f728b51ba$151-165
>
> It wouldn't hurt to move all that to the wiki and leave a link behind, if
> anyone wants to do that.
>
> Cheers
>
> Simon
>
>
>
>
>
> Hope that helps,
>
> Rahul
>
>
>
> On Sun, Mar 18, 2018 at 8:18 PM, Ömer Sinan Ağacan <omeraga...@gmail.com>
> wrote:
>
> Hi,
>
> I'm trying to understand what a "return" list in INFO_TABLE_RET declaration
> line specifies. As far as I understand a "return" in the declaration line is
> something different than a "return" in the body. For example, in this
> definition: (in HeapStackCheck.cmm)
>
> INFO_TABLE_RET ( stg_ret_p, RET_SMALL, W_ info_ptr, P_ ptr )
> return (/* no return values */)
> {
> return (ptr);
> }
>
> The return list is empty and it even says "no return values" explicitly, yet
> it
> returns something.
>
> My guess is that the "return" list in the header is actually for arguments.
> I
> found this info table which has an argument: (in StgMiscClosures.cmm)
>
> INFO_TABLE_RET (stg_restore_cccs_eval, RET_SMALL, W_ info_ptr, W_ cccs)
> return (P_ ret)
> {
> unwind Sp = Sp + WDS(2);
> #if defined(PROFILING)
> CCCS = cccs;
> #endif
> jump stg_ap_0_fast(ret);
> }
>
> This is the use site: (in Interpreter.c)
>
> #if defined(PROFILING)
> // restore the CCCS after evaluating the closure
> Sp_subW(2);
> SpW(1) = (W_)cap->r.rCCCS;
SpW(0) = (W_)&stg_restore_cccs_eval_info;
> #endif
> Sp_subW(2);
> SpW(1) = (W_)tagged_obj;
SpW(0) = (W_)&stg_enter_info;
> RETURN_TO_SCHEDULER_NO_PAUSE(ThreadRunGHC, ThreadYielding);
>
> If I understand this correctly, the "tagged_obj" code will put the return
> value
> in R1, pop the stack (which will have stg_restore_ccs_eval_info at the
> bottom)
> and jump to the info table code shown above. So `P_ ret` is the value of
> `tagged_obj`, and the "return" list is actually for parameters.

Question about indirectees of BLACKHOLE closures

2018-03-20 Thread Ömer Sinan Ağacan
Hi,

I've been looking at BLACKHOLE closures and how the indirectee field is used
and I have a few questions:

Looking at evacuate for BLACKHOLE closures:

case BLACKHOLE:
{
StgClosure *r;
const StgInfoTable *i;
r = ((StgInd*)q)->indirectee;
if (GET_CLOSURE_TAG(r) == 0) {
i = r->header.info;
if (IS_FORWARDING_PTR(i)) {
r = (StgClosure *)UN_FORWARDING_PTR(i);
i = r->header.info;
}
if (i == &stg_TSO_info
|| i == &stg_WHITEHOLE_info
|| i == &stg_BLOCKING_QUEUE_CLEAN_info
|| i == &stg_BLOCKING_QUEUE_DIRTY_info) {
copy(p,info,q,sizeofW(StgInd),gen_no);
return;
}
ASSERT(i != &stg_IND_info);
}
q = r;
*p = r;
goto loop;
}

It seems like indirectee can be a TSO, WHITEHOLE, BLOCKING_QUEUE_CLEAN,
BLOCKING_QUEUE_DIRTY, and it can't be IND. I'm wondering what it means for
a BLACKHOLE to point to a

- TSO
- WHITEHOLE
- BLOCKING_QUEUE_CLEAN
- BLOCKING_QUEUE_DIRTY

Is this documented somewhere or otherwise could someone give a few pointers on
where to look in the code?

Secondly, I also looked at the BLACKHOLE entry code, and it seems like it has a
different assumption about what can indirectee field point to:

INFO_TABLE(stg_BLACKHOLE,1,0,BLACKHOLE,"BLACKHOLE","BLACKHOLE")
(P_ node)
{
W_ r, info, owner, bd;
P_ p, bq, msg;

TICK_ENT_DYN_IND(); /* tick */

retry:
p = StgInd_indirectee(node);
if (GETTAG(p) != 0) {
return (p);
}

info = StgHeader_info(p);
if (info == stg_IND_info) {
// This could happen, if e.g. we got a BLOCKING_QUEUE that has
// just been replaced with an IND by another thread in
// wakeBlockingQueue().
goto retry;
}

if (info == stg_TSO_info ||
info == stg_BLOCKING_QUEUE_CLEAN_info ||
info == stg_BLOCKING_QUEUE_DIRTY_info)
{
("ptr" msg) = ccall allocate(MyCapability() "ptr",
 BYTES_TO_WDS(SIZEOF_MessageBlackHole));

SET_HDR(msg, stg_MSG_BLACKHOLE_info, CCS_SYSTEM);
MessageBlackHole_tso(msg) = CurrentTSO;
MessageBlackHole_bh(msg) = node;

(r) = ccall messageBlackHole(MyCapability() "ptr", msg "ptr");

if (r == 0) {
goto retry;
} else {
StgTSO_why_blocked(CurrentTSO) = BlockedOnBlackHole::I16;
StgTSO_block_info(CurrentTSO) = msg;
jump stg_block_blackhole(node);
}
}
else
{
ENTER(p);
}
}

The difference is, when the tag of indirectee is 0, evacuate assumes that
indirectee can't point to an IND, but BLACKHOLE entry code thinks it's possible
and there's even a comment about why. (I don't understand the comment yet) I'm
wondering if this code is correct, and why. Again any pointers would be
appreciated.

Thanks,

Ömer
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: What does "return" keyword mean in INFO_TABLE_RET declarations?

2018-03-19 Thread Ömer Sinan Ağacan
Hi Rahul,

Thanks, that is really helpful.

So my intuition was correct. I think the naming here is a bit unfortunate
because unless you're already familiar with Cmm, when you see this:

INFO_TABLE_RET ( stg_ret_p, RET_SMALL, W_ info_ptr, P_ ptr )
return (/* no return values */)
{
return (ptr);
}

you will be _very_ confused.

Ömer

2018-03-19 3:53 GMT+03:00 Rahul Muttineni <rahulm...@gmail.com>:
> Hi Omer,
>
> An INFO_TABLE_RET is a frame that "can be returned to" and the return
> keyword allows you to provide a name for the value(s) that was(were)
> returned to this frame and do something with it if you wish. If you didn't
> have this keyword, you would have to do low-level stack manipulations
> yourself to get a handle on the return value and it's easy to mess up.
>
> You can think of INFO_TABLE_RET as a traditional stack frame in languages
> like C, except it's powerful because you can specify custom logic on how you
> deal with the returned value. In some cases, like stg_atomically_frame, you
> may not even return the value further down into the stack until certain
> conditions are met (the transaction is valid).
>
> Hope that helps,
> Rahul
>
> On Sun, Mar 18, 2018 at 8:18 PM, Ömer Sinan Ağacan <omeraga...@gmail.com>
> wrote:
>>
>> Hi,
>>
>> I'm trying to understand what a "return" list in INFO_TABLE_RET
>> declaration
>> line specifies. As far as I understand a "return" in the declaration line
>> is
>> something different than a "return" in the body. For example, in this
>> definition: (in HeapStackCheck.cmm)
>>
>> INFO_TABLE_RET ( stg_ret_p, RET_SMALL, W_ info_ptr, P_ ptr )
>> return (/* no return values */)
>> {
>> return (ptr);
>> }
>>
>> The return list is empty and it even says "no return values" explicitly,
>> yet it
>> returns something.
>>
>> My guess is that the "return" list in the header is actually for
>> arguments. I
>> found this info table which has an argument: (in StgMiscClosures.cmm)
>>
>> INFO_TABLE_RET (stg_restore_cccs_eval, RET_SMALL, W_ info_ptr, W_
>> cccs)
>> return (P_ ret)
>> {
>> unwind Sp = Sp + WDS(2);
>> #if defined(PROFILING)
>> CCCS = cccs;
>> #endif
>> jump stg_ap_0_fast(ret);
>> }
>>
>> This is the use site: (in Interpreter.c)
>>
>> #if defined(PROFILING)
>> // restore the CCCS after evaluating the closure
>> Sp_subW(2);
>> SpW(1) = (W_)cap->r.rCCCS;
>> SpW(0) = (W_)&stg_restore_cccs_eval_info;
>> #endif
>> Sp_subW(2);
>> SpW(1) = (W_)tagged_obj;
>> SpW(0) = (W_)&stg_enter_info;
>> RETURN_TO_SCHEDULER_NO_PAUSE(ThreadRunGHC, ThreadYielding);
>>
>> If I understand this correctly, the "tagged_obj" code will put the return
>> value
>> in R1, pop the stack (which will have stg_restore_ccs_eval_info at the
>> bottom)
>> and jump to the info table code shown above. So `P_ ret` is the value
>> of
>> `tagged_obj`, and the "return" list is actually for parameters.
>>
>> Did I get this right? If I did, I'm curious why it's called "return" and
>> not
>> "args" or something like that.
>>
>> Thanks,
>>
>> Ömer
>
>
>
>
> --
> Rahul Muttineni


What does "return" keyword mean in INFO_TABLE_RET declarations?

2018-03-18 Thread Ömer Sinan Ağacan
Hi,

I'm trying to understand what a "return" list in INFO_TABLE_RET declaration
line specifies. As far as I understand a "return" in the declaration line is
something different than a "return" in the body. For example, in this
definition: (in HeapStackCheck.cmm)

INFO_TABLE_RET ( stg_ret_p, RET_SMALL, W_ info_ptr, P_ ptr )
return (/* no return values */)
{
return (ptr);
}

The return list is empty and it even says "no return values" explicitly, yet it
returns something.

My guess is that the "return" list in the header is actually for arguments. I
found this info table which has an argument: (in StgMiscClosures.cmm)

INFO_TABLE_RET (stg_restore_cccs_eval, RET_SMALL, W_ info_ptr, W_ cccs)
return (P_ ret)
{
unwind Sp = Sp + WDS(2);
#if defined(PROFILING)
CCCS = cccs;
#endif
jump stg_ap_0_fast(ret);
}

This is the use site: (in Interpreter.c)

#if defined(PROFILING)
// restore the CCCS after evaluating the closure
Sp_subW(2);
SpW(1) = (W_)cap->r.rCCCS;
SpW(0) = (W_)&stg_restore_cccs_eval_info;
#endif
Sp_subW(2);
SpW(1) = (W_)tagged_obj;
SpW(0) = (W_)&stg_enter_info;
RETURN_TO_SCHEDULER_NO_PAUSE(ThreadRunGHC, ThreadYielding);

If I understand this correctly, the "tagged_obj" code will put the return value
in R1, pop the stack (which will have stg_restore_ccs_eval_info at the bottom)
and jump to the info table code shown above. So `P_ ret` is the value of
`tagged_obj`, and the "return" list is actually for parameters.

Did I get this right? If I did, I'm curious why it's called "return" and not
"args" or something like that.

Thanks,

Ömer


Re: Is "cml_cont" of CmmCall used in practice?

2018-03-18 Thread Ömer Sinan Ağacan
Hi Shao,

Perhaps not in the Cmm output generated for your programs, but it's definitely
used in the code generator. See e.g. `lowerSafeForeignCall` and `blockCode`
which set the field with `Just`. The former seems to be related to foreign
calls so perhaps try compiling a FFI package. `CmmLayoutStack` uses that field
for code generation (I don't understand the details yet).

Ömer

2018-03-18 8:38 GMT+03:00 Shao, Cheng :
> Hi all,
>
> Is the "cml_cont" field of the CmmCall variant really used in practice? I
> traversed the output of raw Cmm produced by ghc compiling the whole base
> package, but the value of cml_cont is always Nothing.
>
> Regards,
> Shao Cheng
>


Re: A (late-)demand analysis and w/w question

2018-02-20 Thread Ömer Sinan Ağacan
Thanks. I checked both papers, they mention that not all reboxing is
eliminated, but as far as I can see they don't give an example of reboxing that's
not eliminated. It's not hard to come up with an example though. In this
function

fac :: Int -> Int
fac 0 = 1
fac n = n * fac (n - 1)

before simplification the worker looks like this

Rec {
-- RHS size: {terms: 29, types: 10, coercions: 0, joins: 0/2}
$wfac__s28Z
$wfac__s28Z
  = \ ww_s28U ->
  let {
w_s28R
w_s28R = I# ww_s28U } in
  case let {
 n_aU7
 n_aU7 = w_s28R } in
   case check n_aU7 of {
 False ->
   case n_aU7 of { I# x_a27t ->
   case fac_ (I# (-# x_a27t 1#)) of { I# y_a27x ->
   I# (*# x_a27t y_a27x)
   }
   };
 True -> n_aU7
   }
  of ww_s28X
  { I# ww_s28Y ->
  ww_s28Y
  }

`w_s28R` reboxes, but that's easily eliminated by the simplifier. In this
example:

{-# NOINLINE check #-}
check :: Int -> Bool
check !n = True

fac_ :: Int -> Int
fac_ n = if check n then n else n * fac_ (n - 1)

even after simplifications we rebox the argument:

Rec {
-- RHS size: {terms: 17, types: 3, coercions: 0, joins: 0/0}
$wfac_
$wfac_
  = \ ww_s28U ->
  case check (I# ww_s28U) of {
False ->
  case $wfac_ (-# ww_s28U 1#) of ww1_s28Y { __DEFAULT ->
  *# ww_s28U ww1_s28Y
  };
True -> ww_s28U
  }
end Rec }

This seems like a limitation of current demand analyser. I'm going to update
the ticket and put it on hold for now.

Ömer

2018-02-21 1:47 GMT+03:00 Simon Peyton Jones :
> It's called "reboxing" and is referred to in all the strictness analysis 
> papers about GHC.  I don't know a reliable way to get rid of it; but I have 
> it paged out at the moment.
>
> Eg 
> https://www.microsoft.com/en-us/research/publication/theory-practice-demand-analysis-haskell/
> https://www.microsoft.com/en-us/research/publication/demand-analysis/ (the 
> box-demand stuff in the appendix is not implemented in GHC)
>
> Simon
>
>
> | -Original Message-
> | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Ömer
> | Sinan Agacan
> | Sent: 20 February 2018 16:25
> | To: ghc-devs 
> | Subject: A (late-)demand analysis and w/w question
> |
> | Hi,
> |
> | I was recently looking at #6087. One of the cases that increased
> | allocations (see comment:27) is when we do worker/wrapper to pass an
> | `Int#` instead of `Int` when we need the boxed form in the function body.
> | This causes redundant allocations because we already have the boxed
> | version of the value but we passed it unboxed as a result of
> | worker/wrapper.
> |
> | This raises the obvious (but maybe naive?) question of whether we could
> | improve the demand analysis and/or worker/wrapper to avoid unpacking
> | arguments when the argument is boxed again somewhere in the function
> | body.
> |
> | Does this make sense? Has anyone tried this before?
> |
> | Thanks,
> |
> | Ömer


A (late-)demand analysis and w/w question

2018-02-20 Thread Ömer Sinan Ağacan
Hi,

I was recently looking at #6087. One of the cases that increased
allocations (see comment:27) is when we do worker/wrapper to pass an
`Int#` instead of `Int` when we need the boxed form in the function
body. This causes redundant allocations because we already have the
boxed version of the value but we passed it unboxed as a result of
worker/wrapper.

This raises the obvious (but maybe naive?) question of whether we could
improve the demand analysis and/or worker/wrapper to avoid unpacking
arguments when the argument is boxed again somewhere in the function
body.

Does this make sense? Has anyone tried this before?

Thanks,

Ömer


Re: StgLint worth maintaining?

2018-02-10 Thread Ömer Sinan Ağacan
Created #14787 as tracking ticket. Patch is at D4404.

Ömer

2018-02-09 12:22 GMT+03:00 Simon Peyton Jones :
> Good summary!  I suggest that you open a ticket with this email as the 
> Description.  Then we can point to it later.
>
> I agree that there is little point in flogging a dead horse.  But there are 
> /some/ invariants, so I vote for
> |  2. Rewrite it to only check these two and nothing else, enable it in
> | validate (and in other build flavours that enable CoreLint).
> |
>
> Simon
>
> |  -Original Message-
> |  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Ömer
> |  Sinan Agacan
> |  Sent: 09 February 2018 08:42
> |  To: ghc-devs 
> |  Subject: StgLint worth maintaining?
> |
> |  Hi,
> |
> |  I've been looking into some StgLint-related tickets:
> |
> |  - #13994: Found a StgLint problem and fixed, there's another problem
> |waiting to be fixed. Both related with the fact that after
> |unarisation we lose even more typing information and type
> |checks needs to be relaxed.
> |
> |  - #14116: StgLint failed to look through newtypes, and because
> |  coercions
> |are removed at that point it failed to type check. Solution
> |was to relax type checks.
> |
> |  - #5345:  Because `unsafeCoerce# is operationally no-op, and we don't
> |have coercions in STG, StgLint can't type check at all. The
> |commit message notes:
> |
> |> Fundamentally STG Lint is impossible, because
> |  unsafeCoerce#
> |> can randomise all the types.
> |
> |> This patch does a bit of fiddle faddling in StgLint which
> |> makes it a bit better, but it's a losing battle.
> |
> |  - #14117: Related with StgLint not keeping up with recent changes
> |  (join
> |points), because it's not enabled by default in
> |tests/validate.
> |
> |  - #14118: Related with the fact that pre- and post-unarise we have
> |different invariants in STG. Solution was to add a "unarise"
> |parameter and do different checks based on that.
> |
> |  - #14120: Again type checking errors. Commit for #14116 also fixes
> |  this.
> |The commits compares `typePrimRep`s of types instead of
> |comparing actual types (even this is not enough, see
> |  #13994).
> |
> |  All this of course took time to debug.
> |
> |  In addition, the new `StgCSE` pass makes transformations that trigger
> |  case alternative checks (and probably some other checks) because
> |  scrutinee and result won't have same types after the transformation
> |  described in `Note [Case 2: CSEing case binders]`.
> |
> |  There's also this comment in StgLint.hs
> |
> |  WARNING:
> |  
> |
> |  This module has suffered bit-rot; it is likely to yield lint
> |  errors
> |  for Stg code that is currently perfectly acceptable for code
> |  generation.  Solution: don't use it!  (KSW 2000-05).
> |
> |  It seems like it hasn't been used since 2000.
> |
> |  All this suggests that
> |
> |  - Checks related to types are impossible in StgLint. (see e.g. commit
> |messages in #5345, #1420, transformations done by unariser and
> |StgCSE)
> |
> |  - It's not enabled since 2000, which I think means that it's not
> |needed.
> |
> |  This makes me question whether it's worth maintaining. Maybe we should
> |  just remove it.
> |
> |  If we still want to keep we should decide on what it's supposed to do.
> |  Only invariants I can think of are:
> |
> |  - After unarise there should be no unboxed tuple and sum binders.
> |
> |unarise is a simple pass and does same thing to all binders, there
> |  are
> |no tricky cases so I'm not sure if we need to check this.
> |
> |  - Variables should be defined before use. I again don't know if this
> |should be checked, could this be useful for StgCSE?
> |
> |  So I think we should do one of these:
> |
> |  1. Remove StgLint.
> |
> |  2. Rewrite it to only check these two and nothing else, enable it in
> | validate (and in other build flavours that enable CoreLint).
> |
> |  What do you think? If you think we should keep StgLint, can you think
> |  of any other checks? If we could reach a consensus I'm hoping to
> |  update StgLint (or remove it).
> |
> |  Thanks,
> |
> |  Ömer

StgLint worth maintaining?

2018-02-09 Thread Ömer Sinan Ağacan
Hi,

I've been looking into some StgLint-related tickets:

- #13994: Found a StgLint problem and fixed, there's another problem
  waiting to be fixed. Both are related to the fact that after
  unarisation we lose even more typing information and type
  checks need to be relaxed.

- #14116: StgLint failed to look through newtypes, and because coercions
  are removed at that point it failed to type check. Solution
  was to relax type checks.

- #5345:  Because `unsafeCoerce#` is operationally a no-op, and we don't
  have coercions in STG, StgLint can't type check at all. The
  commit message notes:

  > Fundamentally STG Lint is impossible, because unsafeCoerce#
  > can randomise all the types.

  > This patch does a bit of fiddle faddling in StgLint which
  > makes it a bit better, but it's a losing battle.

- #14117: Related with StgLint not keeping up with recent changes (join
  points), because it's not enabled by default in
  tests/validate.

- #14118: Related with the fact that pre- and post-unarise we have
  different invariants in STG. Solution was to add a "unarise"
  parameter and do different checks based on that.

- #14120: Again type checking errors. Commit for #14116 also fixes this.
  The commits compares `typePrimRep`s of types instead of
  comparing actual types (even this is not enough, see #13994).

All this of course took time to debug.

In addition, the new `StgCSE` pass makes transformations that trigger
case alternative checks (and probably some other checks) because
scrutinee and result won't have same types after the transformation
described in `Note [Case 2: CSEing case binders]`.

There's also this comment in StgLint.hs

WARNING:


This module has suffered bit-rot; it is likely to yield lint errors
for Stg code that is currently perfectly acceptable for code
generation.  Solution: don't use it!  (KSW 2000-05).

It seems like it hasn't been used since 2000.

All this suggests that

- Checks related to types are impossible in StgLint. (see e.g. commit
  messages in #5345, #1420, transformations done by unariser and
  StgCSE)

- It's not enabled since 2000, which I think means that it's not
  needed.

This makes me question whether it's worth maintaining. Maybe we should
just remove it.

If we still want to keep we should decide on what it's supposed to do.
Only invariants I can think of are:

- After unarise there should be no unboxed tuple and sum binders.

  unarise is a simple pass and does the same thing to all binders; there are
  no tricky cases so I'm not sure if we need to check this.

- Variables should be defined before use. I again don't know if this
  should be checked, could this be useful for StgCSE?

So I think we should do one of these:

1. Remove StgLint.

2. Rewrite it to only check these two and nothing else, enable it in
   validate (and in other build flavours that enable CoreLint).

What do you think? If you think we should keep StgLint, can you think of
any other checks? If we could reach a consensus I'm hoping to update
StgLint (or remove it).

Thanks,

Ömer


Slow validate failures

2017-03-24 Thread Ömer Sinan Ağacan
Hi all,

I have a patch that affects profiling code and I realized neither `./validate
--fast` nor `./validate` tests the "prof" way, so I tried `./validate --slow`. I
saw that even on a clean branch I get 250 unexpected failures and 6 unexpected
passes. I had a quick look at logs. Most of the failures seem to be in these
formats:

-
Compile failed (exit code 1) errors were:
T6005a.hs:1:1: fatal:
Cannot load -prof objects when GHC is built with -dynamic
To fix this, either:
  (1) Use -fexternal-interpreter, or
  (2) Build the program twice: once with -dynamic, and then
  with -prof using -osuf to set a different object file suffix.

-
Compile failed (exit code 1) errors were:

T5984_Lib.hs:3:8: error:
Could not find module ‘Prelude’
Perhaps you haven't installed the "p_dyn" libraries for
package ‘base-4.10.0.0’?
Use -v to see a list of the files searched for.

T5984_Lib.hs:5:1: error:
Could not find module ‘Language.Haskell.TH’
Perhaps you haven't installed the "p_dyn" libraries for
package ‘template-haskell-2.12.0.0’?
Use -v to see a list of the files searched for.

But there are also some serious-looking failures, like

=> hpc_fork(hpc) 5717 of 5834 [6, 245, 0]
...
+++ "/tmp/ghctest-yrj0el9g/test
spaces/../../libraries/ghc-compact/tests/compact_share.run/compact_share.run.stdout.normalised"
2017-03-24 00:38:02.486282332 +0300
@@ -1,4 +1,4 @@
 275599
-3801088
+6291456
 275599
-2228224
+3506176

So it seems at this point there's basically no reliable way to test profiling
changes. I was wondering if someone here know anything about these. If anyone's
interested, I pushed test output of `./validate --slow` here: (9.2M file)
https://gist.githubusercontent.com/osa1/7cbcc8303f1e213a10accf0bcd9b5ab2/raw/75371245bba2918f4ec97675abea9af661c77b25/gistfile1.txt

Ömer


Re: A possible bug in ghc documentation

2017-03-20 Thread Ömer Sinan Ağacan
Hi Yi,

Thanks for reporting. I just fixed this.

Ömer

2017-03-20 4:29 GMT+03:00 yi lu :

> Hi all,
>
> Sorry to bother. I'm not sure if this is the right place to post this, but
> I find a possible bug in ghc documentation.
>
> http://downloads.haskell.org/~ghc/8.0.2/docs/html/users_
> guide/flags.html#language-options
>
> In 7.6.12, -XDeriveGeneric appears twice.
>
> If it is a bug, please fix. Thanks.
>
>
>
>
> Yi
>


Re: 177 unexpected test failures on a new system -- is this yet another linker issue?

2016-11-11 Thread Ömer Sinan Ağacan
Sylvain, I tried your patch, here's the output:


cd "./th/T5976.run" &&  "/home/omer/haskell/ghc/inplace/test
spaces/ghc-stage2" -c T5976.hs -dcore-lint -dcmm-lint
-no-user-package-db -rtsopts -fno-warn-missed-specialisations
-fshow-warning-groups -dno-debug-output -XTemplateHaskell -package
template-haskell -fexternal-interpreter -v0
Actual stderr output differs from expected:
--- ./th/T5976.run/T5976.stderr.normalised  2016-11-11
16:22:02.247761214 -0500
+++ ./th/T5976.run/T5976.comp.stderr.normalised 2016-11-11
16:22:02.247761214 -0500
@@ -1,7 +1,4 @@
-
-T5976.hs:1:1:
-Exception when trying to run compile-time code:
-  bar
-CallStack (from HasCallStack):
-  error, called at T5976.hs:: in :Main
-Code: error ((++) "foo " error "bar")
+ghc-iserv.bin: internal loadArchive: invalid GNU-variant filename
`/SYM64/ ' found while reading
`/home/omer/haskell/ghc/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.5.0.0.a'
+(GHC version 8.1.20161107 for x86_64_unknown_linux)
+Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
+ghc: ghc-iserv terminated (-6)
*** unexpected failure for T5976(ext-interp)

Unexpected results from:
    TEST="T5976"

2016-11-11 12:02 GMT-05:00 Ömer Sinan Ağacan <omeraga...@gmail.com>:
> So I just tried validating on another system:
>
> > ghc git:(master) $ uname -a
> Linux linux-enrr.suse 4.1.34-33-default #1 SMP PREEMPT Thu Oct 20 08:03:29
> UTC 2016 (fe18aba) x86_64 x86_64 x86_64 GNU/Linux
>
> > ghc git:(master) $ gcc --version
> gcc (SUSE Linux) 4.8.5
> Copyright (C) 2015 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions.  There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR 
> PURPOSE.
>
> > ghc git:(master) $ ld --version
> GNU ld (GNU Binutils; openSUSE Leap 42.1) 2.26.1
> Copyright (C) 2015 Free Software Foundation, Inc.
> This program is free software; you may redistribute it under the terms of
> the GNU General Public License version 3 or (at your option) a later
> version.
> This program has absolutely no warranty.
>
> It validated without any errors. So I can't reproduce it right now. I'll try
> the patch sometime later today when I have the other laptop with me.
>
> Sylvain, do you have any ideas on what difference may be causing this? I'm
> pasting gcc and ld versions but I'm not sure if they're relevant at all.
>
> 2016-11-11 11:55 GMT-05:00 Sylvain Henry <sylv...@haskus.fr>:
>> My bad, in fact we do.
>>
>> Could you try with the attached patch? It shows the failing filename in the
>> archive.
>>
>>
>> On 11/11/2016 17:18, Sylvain Henry wrote:
>>
>> It seems like we don't bypass the special filename "/" (symbol lookup table)
>> in rts/Linker.c
>>
>> https://en.wikipedia.org/wiki/Ar_(Unix)#System_V_.28or_GNU.29_variant
>>
>>
>> On 11/11/2016 16:49, Ömer Sinan Ağacan wrote:
>>
>> Ah, sorry, that line was truncated. I posted the output here:
>> https://gist.githubusercontent.com/osa1/ea72655b8369099e84a67e0949adca7e/raw/9e72cbfb859cb839f1898af39a46ff0896237d15/gistfile1.txt
>>
>> That line should be
>>
>> +ghc-iserv.bin: internal loadArchive: GNU-variant filename offset not found
>> while reading filename from
>> `/home/omer/haskell/ghc/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.5.0.0.a'
>> +(GHC version 8.1.20161107 for x86_64_unknown_linux)
>> +Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
>>
>>
>> 2016-11-11 0:52 GMT-05:00 Reid Barton <rwbar...@gmail.com>:
>>>
>>> On Thu, Nov 10, 2016 at 11:12 PM, Ömer Sinan Ağacan
>>> <omeraga...@gmail.com> wrote:
>>> > I'm trying to validate on a new system (not sure if related, but it has
>>> > gcc
>>> > 6.2.1 and ld 2.27.0), and I'm having 177 unexpected failures, most
>>> > (maybe
>>> > even
>>> > all) of them are similar to this one:
>>> >
>>> > => T5976(ext-interp) 1 of 1 [0, 0, 0]
>>> > cd "./th/T5976.run" &&  "/home/omer/haskell/ghc/inplace/test
>>> > spaces/ghc-stage2" -c T5976.hs -dcore-dno-debug-output -XTemplateHaskell
>>> > -package template-haskell -fexternal-interpreter -v0
>>> > Actual stderr output differs from expected:
>>> > --- ./th/T5976.run/T5976.stderr.normalised  2016-11-10
>>> > 23:01:39.351997560 -05

Re: 177 unexpected test failures on a new system -- is this yet another linker issue?

2016-11-11 Thread Ömer Sinan Ağacan
So I just tried validating on another system:

> ghc git:(master) $ uname -a
Linux linux-enrr.suse 4.1.34-33-default #1 SMP PREEMPT Thu Oct 20 08:03:29
UTC 2016 (fe18aba) x86_64 x86_64 x86_64 GNU/Linux

> ghc git:(master) $ gcc --version
gcc (SUSE Linux) 4.8.5
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

> ghc git:(master) $ ld --version
GNU ld (GNU Binutils; openSUSE Leap 42.1) 2.26.1
Copyright (C) 2015 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or (at your option) a later
version.
This program has absolutely no warranty.

It validated without any errors. So I can't reproduce it right now. I'll try
the patch sometime later today when I have the other laptop with me.

Sylvain, do you have any ideas on what difference may be causing this? I'm
pasting gcc and ld versions but I'm not sure if they're relevant at all.

2016-11-11 11:55 GMT-05:00 Sylvain Henry <sylv...@haskus.fr>:
> My bad, in fact we do.
>
> Could you try with the attached patch? It shows the failing filename in the
> archive.
>
>
> On 11/11/2016 17:18, Sylvain Henry wrote:
>
> It seems like we don't bypass the special filename "/" (symbol lookup table)
> in rts/Linker.c
>
> https://en.wikipedia.org/wiki/Ar_(Unix)#System_V_.28or_GNU.29_variant
>
>
> On 11/11/2016 16:49, Ömer Sinan Ağacan wrote:
>
> Ah, sorry, that line was truncated. I posted the output here:
> https://gist.githubusercontent.com/osa1/ea72655b8369099e84a67e0949adca7e/raw/9e72cbfb859cb839f1898af39a46ff0896237d15/gistfile1.txt
>
> That line should be
>
> +ghc-iserv.bin: internal loadArchive: GNU-variant filename offset not found
> while reading filename from
> `/home/omer/haskell/ghc/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.5.0.0.a'
> +(GHC version 8.1.20161107 for x86_64_unknown_linux)
> +Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
>
>
> 2016-11-11 0:52 GMT-05:00 Reid Barton <rwbar...@gmail.com>:
>>
>> On Thu, Nov 10, 2016 at 11:12 PM, Ömer Sinan Ağacan
>> <omeraga...@gmail.com> wrote:
>> > I'm trying to validate on a new system (not sure if related, but it has
>> > gcc
>> > 6.2.1 and ld 2.27.0), and I'm having 177 unexpected failures, most
>> > (maybe
>> > even
>> > all) of them are similar to this one:
>> >
>> > => T5976(ext-interp) 1 of 1 [0, 0, 0]
>> > cd "./th/T5976.run" &&  "/home/omer/haskell/ghc/inplace/test
>> > spaces/ghc-stage2" -c T5976.hs -dcore-dno-debug-output -XTemplateHaskell
>> > -package template-haskell -fexternal-interpreter -v0
>> > Actual stderr output differs from expected:
>> > --- ./th/T5976.run/T5976.stderr.normalised  2016-11-10
>> > 23:01:39.351997560 -0500
>> > +++ ./th/T5976.run/T5976.comp.stderr.normalised 2016-11-10
>> > 23:01:39.351997560 -0500
>> > @@ -1,7 +1,4 @@
>> > -
>> > -T5976.hs:1:1:
>> > -Exception when trying to run compile-time code:
>> > -  bar
>> > -CallStack (from HasCallStack):
>> > -  error, called at T5976.hs:: in :Main
>> > -Code: error ((++) "foo " error "bar")
>> > +ghc-iserv.bin: internal loadArchive: GNU-variant filename offset
>> > not
>> > found while reading filename f
>>
>> Did this line get truncated? It might help to have the rest of it.
>>
>> Regards,
>> Reid Barton
>
>
>
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: 177 unexpected test failures on a new system -- is this yet another linker issue?

2016-11-11 Thread Ömer Sinan Ağacan
Ah, sorry, that line was truncated. I posted the output here:
https://gist.githubusercontent.com/osa1/ea72655b8369099e84a67e0949adca7e/raw/9e72cbfb859cb839f1898af39a46ff0896237d15/gistfile1.txt

That line should be

+ghc-iserv.bin: internal loadArchive: GNU-variant filename offset not
found while reading filename from
`/home/omer/haskell/ghc/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.5.0.0.a'
+(GHC version 8.1.20161107 for x86_64_unknown_linux)
+Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug


2016-11-11 0:52 GMT-05:00 Reid Barton <rwbar...@gmail.com>:

> On Thu, Nov 10, 2016 at 11:12 PM, Ömer Sinan Ağacan
> <omeraga...@gmail.com> wrote:
> > I'm trying to validate on a new system (not sure if related, but it has
> gcc
> > 6.2.1 and ld 2.27.0), and I'm having 177 unexpected failures, most (maybe
> > even
> > all) of them are similar to this one:
> >
> > => T5976(ext-interp) 1 of 1 [0, 0, 0]
> > cd "./th/T5976.run" &&  "/home/omer/haskell/ghc/inplace/test
> > spaces/ghc-stage2" -c T5976.hs -dcore-dno-debug-output -XTemplateHaskell
> > -package template-haskell -fexternal-interpreter -v0
> > Actual stderr output differs from expected:
> > --- ./th/T5976.run/T5976.stderr.normalised  2016-11-10
> > 23:01:39.351997560 -0500
> > +++ ./th/T5976.run/T5976.comp.stderr.normalised 2016-11-10
> > 23:01:39.351997560 -0500
> > @@ -1,7 +1,4 @@
> > -
> > -T5976.hs:1:1:
> > -Exception when trying to run compile-time code:
> > -  bar
> > -CallStack (from HasCallStack):
> > -  error, called at T5976.hs:: in :Main
> > -Code: error ((++) "foo " error "bar")
> > +ghc-iserv.bin: internal loadArchive: GNU-variant filename offset not
> > found while reading filename f
>
> Did this line get truncated? It might help to have the rest of it.
>
> Regards,
> Reid Barton
>


177 unexpected test failures on a new system -- is this yet another linker issue?

2016-11-10 Thread Ömer Sinan Ağacan
I'm trying to validate on a new system (not sure if related, but it has gcc
6.2.1 and ld 2.27.0), and I'm having 177 unexpected failures, most (maybe
even
all) of them are similar to this one:

=> T5976(ext-interp) 1 of 1 [0, 0, 0]
cd "./th/T5976.run" &&  "/home/omer/haskell/ghc/inplace/test
spaces/ghc-stage2" -c T5976.hs -dcore-dno-debug-output -XTemplateHaskell
-package template-haskell -fexternal-interpreter -v0
Actual stderr output differs from expected:
--- ./th/T5976.run/T5976.stderr.normalised  2016-11-10
23:01:39.351997560 -0500
+++ ./th/T5976.run/T5976.comp.stderr.normalised 2016-11-10
23:01:39.351997560 -0500
@@ -1,7 +1,4 @@
-
-T5976.hs:1:1:
-Exception when trying to run compile-time code:
-  bar
-CallStack (from HasCallStack):
-  error, called at T5976.hs:: in :Main
-Code: error ((++) "foo " error "bar")
+ghc-iserv.bin: internal loadArchive: GNU-variant filename offset not
found while reading filename f
+(GHC version 8.1.20161107 for x86_64_unknown_linux)
+Please report this as a GHC bug:
http://www.haskell.org/ghc/reportabug
+ghc: ghc-iserv terminated (-6)
*** unexpected failure for T5976(ext-interp)

Does anyone know what is this about?


Re: Register Allocator Tests

2016-10-11 Thread Ömer Sinan Ağacan
> Is it not possible to unit test GHC?

You need to export functions you want to test, and then write a program that
tests those functions using the `ghc` package.

See
https://github.com/ghc/ghc/blob/master/testsuite/tests/unboxedsums/unboxedsums_unit_tests.hs
for an example.

2016-10-11 17:50 GMT-04:00 Thomas Jakway :

> I read somewhere that fixing the graph register allocator would be a good
> project so I thought I'd look into it. I couldn't find any tickets about it
> on Trac though so I was poking around for tests to see what (if anything)
> was wrong with it.
>
> After I sent that last email I googled around for how to write ghc unit
> tests and this
>  is
> the only thing I found.  Is it not possible to unit test GHC?  If not are
> there plans/discussions about this?  I think it'd help document the code
> base if nothing else and it'd be a good way to get my feet wet.
> On 10/11/2016 02:13 PM, Ben Gamari wrote:
>
> Thomas Jakway   writes:
>
>
> Can anyone point me to the register allocator tests (especially for the
> graph register allocator)?  Can't seem to find them and grepping doesn't
> turn up much (pretty much just
> testsuite/tests/codeGen/should_run/cgrun028.h).
>
>
> What sort of tests are you looking for in particular? I'm afraid all we
> have are regression tests covering the code generator as a whole.
>
> Cheers,
>
> - Ben
>
>
>


Re: IRC: Logging #ghc with ircbrowse.net?

2016-09-10 Thread Ömer Sinan Ağacan
+1 from me. I don't have any preference to which service to use, as long as we
have a logger with good uptime. I was regularly checking the logs when it was
working.

2016-09-10 16:00 GMT-04:00 Ben Gamari :
> Hello GHC developers,
>
> In the past we have relied upon Phabricator's Chatlog application to log
> the #ghc freenode channel. While Chatlog did its job admirably, it seems
> that Phacility has deprecated it with no proposed replacement.
>
> Thankfully, there appears to be no shortage of alternatives. In
> particular, Chris Done's ircbrowse.net seems to offer a great deal of
> functionality and already mirrors a number of other Haskell-related
> channels.
>
> Would anyone be opposed to adding #ghc to the list of channels monitored
> by ircbrowse.net?
>
> Cheers,
>
> - Ben
>


-fno-warn lines in the lexer can be removed with Alex 3.1.5

2016-09-10 Thread Ömer Sinan Ağacan
I was working on the lexer today and realized that we can now remove
-fno-warn lines in the lexer if we're OK with requiring Alex >=3.1.5.


Re: Feature request bounty

2016-09-08 Thread Ömer Sinan Ağacan
I updated implementation section of the wiki page
(https://ghc.haskell.org/trac/ghc/wiki/NewtypeOptimizationForGADTS).

2016-09-08 7:21 GMT-04:00 Simon Peyton Jones <simo...@microsoft.com>:
> Omer: yes you have it right.
>
> I agree with your remarks about follow-up simplifications.  That's jolly 
> annoying.
>
> I don't know a good path here.  I suppose we could make a simple STG 
> simplifier
>
> Stuff on email gets lost/buried. Would you like to extend the wiki page with 
> your implementation notes?  That would capture them.
>
> Simon
>
> | -Original Message-
> | From: Ömer Sinan Ağacan [mailto:omeraga...@gmail.com]
> | Sent: 08 September 2016 02:20
> | To: Simon Peyton Jones <simo...@microsoft.com>
> | Cc: ghc-devs <ghc-devs@haskell.org>
> | Subject: Re: Feature request bounty
> |
> | Simon,
> |
> | As far as I understand we want to do these two transformations:
> |
> | (when D is a newtype-like data type constructor)
> |
> |
> | First:
> | D arg1 arg2 ... argN
> | ==>
> | nv_arg (where nv_arg is the only non-void argument)
> |
> | (but we somehow need to bind other args or do substitution. If we do
> | this
> | Stg though we don't need to bind those args as unarise doesn't care
> | about
> | what a void argument is as long as it's void it gets rid of it and it
> | can
> | check void-ness by looking at Id's type)
> |
> | Second:
> | case  of
> |   D arg1 arg2 ... argN -> 
> | ==>
> | let arg1 = ...
> | arg2 = ...
> | arg3 = ...
> |  in 
> |  (we know only one of these args will be non-void, but all of them
> | should be
> |  bound as they can be referred in )
> |
> | Am I right?
> |
> | I think if we do this in Stg we lose some optimization opportunities and
> | generate ugly code. For example, if the first transformation happens in a
> | let-binding RHS maybe simplifier decides to inline it as it can't
> | duplicate work after the transformation. Similarly it can decide to
> | inline the non-void argument after second transformation which may lead
> | to further optimizations etc.
> |
> | For an example of an ugly code, suppose we had this:
> |
> | case  of
> |   D (T x) -> 
> |
> | in Stg this looks like
> |
> | case  of
> |   D v -> case v of
> |T x -> 
> |
> | So now if we do the second transformation we get
> |
> | let v =  in
> | case v of
> |   T x -> 
> |
> | but ideally we'd get
> |
> | case  of
> |   T x -> 
> |
> | I think simplifier would be able to do this after the second
> | transformation.
> | Am I making any sense?
> |
> | I have no idea how to do this in the simplifier without losing type
> | safety though...
> |
> | Are these two transformations also what you had in mind or do you have
> | something else?
> |
> | - Omer
> |
> | 2016-09-07 16:58 GMT-04:00 David Feuer <david.fe...@gmail.com>:
> | > I can't guarantee I'll be able to understand things well enough to
> | > take your advice, but I'd be willing to give it a shot. Where would be
> | > the right place to stick this? I am not at all familiar with the GHC
> | > code generation system. Also, what might I have to do to avoid forcing
> | > the same object repeatedly? If I have multiple such constructors, and
> | > someone does
> | >
> | > case x of
> | >   Con1 (Con2 (Con3 (Con4 y))) -> e
> | >
> | > I want to smash this down to something that looks like
> | >
> | > case x of y {
> | >   _ -> e }
> | >
> | > Do I need to worry about this, or will some later C-- pass take care of
> | it?
> | >
> | > On Wed, Sep 7, 2016 at 4:46 PM, Simon Peyton Jones
> | > <simo...@microsoft.com> wrote:
> | >> I can advise about how (see comment:9 of #1965).  I can only see how
> | to do it in an un-typed way in the back end.
> | >>
> | >> Simon
> | >>
> | >> | -Original Message-
> | >> | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of
> | >> | David Feuer
> | >> | Sent: 07 September 2016 19:33
> | >> | To: ghc-devs <ghc-devs@haskell.org>
> | >> | Subject: Feature request bounty
> | >> |
> | >> | I'd like to place a small bounty on
> | >> | https://ghc.haskell.org/trac/ghc/wiki/NewtypeOptimizationForGADTS .
> | >> |
> | >> | If someone implements this request by GHC 8.2, I will buy them 24
> | >> | bottles of high-end root beer (e.g., Maine Root) or something of
> | >> |

Re: Can we mark DataCon ptrs only in allocation sites and not generate entry code?

2016-08-25 Thread Ömer Sinan Ağacan
> StgCmmCon.hs:240 reads
>  ; return (mkRhsInit dflags reg lf_info hp_plus_n) }
>
> what's that got to do with pointer tagging?

If I understand correctly, mkRhsInit returns a tagged pointer:

mkRhsInit :: DynFlags -> LocalReg -> LambdaFormInfo -> CmmExpr -> CmmAGraph
mkRhsInit dflags reg lf_info expr
  = mkAssign (CmmLocal reg) (addDynTag dflags expr (lfDynTag dflags lf_info))

> But also we clearly must do so in the entry code for a data constructor

Why? It's not clear to me. If every pointer to a constructor is tagged then
maybe we don't need to enter constructors at all.

2016-08-25 15:41 GMT+00:00 Simon Peyton Jones :
> StgCmmCon.hs:240 reads
>  ; return (mkRhsInit dflags reg lf_info hp_plus_n) }
>
> what's that got to do with pointer tagging?
>
>
> But yes we need to do it in both places.  Consider
>
> f x xs = let y = x:xs
>  in g y
>
> We should tag y before passing it to g.  That's the StgCmmCon case.
>
>
> But also we clearly must do so in the entry code for a data constructor
>
>
> Simon
>
> |  -Original Message-
> |  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Ömer
> |  Sinan Agacan
> |  Sent: 25 August 2016 11:46
> |  To: ghc-devs 
> |  Subject: Can we mark DataCon ptrs only in allocation sites and not
> |  generate entry code?
> |
> |  As far as I can see in the native code compiler we mark DataCon
> |  pointers in two places:
> |
> |  1. In allocation sites (StgCmmCon.hs:240)
> |  2. In DataCon entry code (StgCmm.hs:244)
> |
> |  I was wondering why we can't get away with just doing (1). Can anyone
> |  give me
> |  an example where an allocation doesn't return a tagged pointer and we
> |  need to
> |  tag it in entry code? If every allocation returns a tagged pointer,
> |  then why do
> |  we need (2) ?
> |
> |  Thanks


Can we mark DataCon ptrs only in allocation sites and not generate entry code?

2016-08-25 Thread Ömer Sinan Ağacan
As far as I can see in the native code compiler we mark DataCon
pointers in two places:

1. In allocation sites (StgCmmCon.hs:240)
2. In DataCon entry code (StgCmm.hs:244)

I was wondering why we can't get away with just doing (1). Can anyone give me
an example where an allocation doesn't return a tagged pointer and we need to
tag it in entry code? If every allocation returns a tagged pointer, then why do
we need (2) ?

Thanks


Re: Broken Build due to T1969

2016-08-17 Thread Ömer Sinan Ağacan
I can't reproduce it on my x86_64 Linux laptop when I boot GHC HEAD with GHC
7.10.2.

Anyway, feel free to revert 773e3aad (which disables the test) but I think
bumping the numbers a little bit is a better option here as that would at least
prevent things from getting worse.


I'm curious, does anyone know how much RAM harbormaster has? I can't tell from
the logs how many times the GC ran during T1969, but maybe the GC is running
fewer times, which would cause the residency to increase.

I'm also wondering if there are more stable ways of testing residency. Can we
maybe do something like:

runMajorGC >> printResidency

In some specific locations in the compiler and only look for those in perf
tests? Or does anyone have any better ideas?

2016-08-17 18:05 GMT+00:00 Ömer Sinan Ağacan <omeraga...@gmail.com>:
> Hmm, it seems to be a 64bit Linux. Interestingly peak_megabytes_allocated is
> also different than numbers I'm getting on my 64bit Linux laptop. Maybe this 
> is
> because harbormaster is booting with GHC 7.10.3 and I'm booting with GHC 
> 8.0.1.
> I'm currently validating with 7.10.3 to see if I'll get the same numbers.
>
> 2016-08-17 15:17 GMT+00:00 Matthew Pickering <matthewtpicker...@gmail.com>:
>> I am just seeing it on harbourmaster.
>>
>> https://phabricator.haskell.org/harbormaster/build/12730/?l=100
>>
>> Matt
>>
>>
>>
>> On Wed, Aug 17, 2016 at 3:59 PM, Ömer Sinan Ağacan <omeraga...@gmail.com> 
>> wrote:
>>> Ugh. I validated that patch before committing and validated many times after
>>> that patch. Are you using a 32bit system? Maybe we should bump the numbers 
>>> for
>>> 32bit builds too.
>>>
>>> I'm hesitant to mark the test broken because I'm afraid that the numbers 
>>> will
>>> increase if we stop testing for allocations/residency completely. I think
>>> temporarily bumping numbers is better than temporarily disabling it.
>>>
>>> What are the numbers you're getting?
>>>
>>> 2016-08-17 14:16 GMT+00:00 Matthew Pickering <matthewtpicker...@gmail.com>:
>>>> Hi all,
>>>>
>>>> https://phabricator.haskell.org/rGHC773e3aadac4bbee9a0173ebc90ffdc9458a2a3a9
>>>>
>>>> broke the build by re-enabling the test T1969
>>>>
>>>> The ticket tracking this is: https://ghc.haskell.org/trac/ghc/ticket/12437
>>>>
>>>> Omer: Is it best to revert this patch and mark the test broken again?
>>>>
>>>> Matt


Re: Broken Build due to T1969

2016-08-17 Thread Ömer Sinan Ağacan
Ugh. I validated that patch before committing and validated many times after
that patch. Are you using a 32bit system? Maybe we should bump the numbers for
32bit builds too.

I'm hesitant to mark the test broken because I'm afraid that the numbers will
increase if we stop testing for allocations/residency completely. I think
temporarily bumping numbers is better than temporarily disabling it.

What are the numbers you're getting?

2016-08-17 14:16 GMT+00:00 Matthew Pickering :
> Hi all,
>
> https://phabricator.haskell.org/rGHC773e3aadac4bbee9a0173ebc90ffdc9458a2a3a9
>
> broke the build by re-enabling the test T1969
>
> The ticket tracking this is: https://ghc.haskell.org/trac/ghc/ticket/12437
>
> Omer: Is it best to revert this patch and mark the test broken again?
>
> Matt


Improving cost center reports to show residency?

2016-08-08 Thread Ömer Sinan Ağacan
One thing we can't currently see in cost center reports is residency, and
because of that cost centers can't be used for fixing memory leaks or reducing
maximum memory residency. For example, I can have a function that returns
an `Int` but allocates lots of intermediate data on the way. The reports show
this function as allocating a lot, but it has no effect on my program's
residency (especially if it runs fast).

So I'm thinking of somehow using cost centers for reasoning about memory
residency. One idea is to print a summary after every major GC, by doing another
pass on the whole heap and recording attributions. This can be used for plotting
live data of cost centers over time. (like hp2ps but for cost centers)

Another idea is to add a "residency" column to the profiling reports. I'm not
sure how to update this column at runtime though.

The main use case for me is fixing T1969, but of course this is a very general
solution.

Does anyone have any other ideas?


A function with absent demand on syntactically used argument?

2016-08-03 Thread Ömer Sinan Ağacan
I'm reading the code in WwLib that generates worker functions and I'm confused
about absent lets. Can anyone give an example function that has absent demand
on its argument even though the argument is syntactically used in the body?

I think we should add some examples to `Note [Absent errors]` in WwLib.hs.

One example came to my mind was something like

f x = ... undefined x ...

I'm guessing that if we were to generate a worker for this, we'd need to
generate an absent let for x in the worker function. But "undefined" has a
weird type (polymorphic over both function and non-function types) and I don't
know what its demand signature is (maybe we should document this too), so
I'm not sure.


Re: atomicModifyMutVar#: cas() is not inlined

2016-07-27 Thread Ömer Sinan Ağacan
To keep this thread up-to-date: https://phabricator.haskell.org/D2431

2016-07-27 14:08 GMT+00:00 Alex Biehl <alex.bi...@gmail.com>:
> There already is the CallishMachOp MO_cmpxchg in
> https://github.com/ghc/ghc/blob/5d98b8bf249fab9bb0be6c5d4e8ddd4578994abb/compiler/cmm/CmmMachOp.hs#L587
>
> All is left todo would be to add it to CmmParse.y (which has a TODO comment
> https://github.com/ghc/ghc/blob/714bebff44076061d0a719c4eda2cfd213b7ac3d/compiler/cmm/CmmParse.y#L992)
> then you could use that instead of the ccall.
>
> Ömer Sinan Ağacan <omeraga...@gmail.com> schrieb am Mi., 27. Juli 2016 um
> 11:15 Uhr:
>>
>> This is from definition of stg_atomicModifyMutVarzh(): (for threaded
>> runtime)
>>
>> retry:
>>   x = StgMutVar_var(mv);
>>   StgThunk_payload(z,1) = x;
>>   (h) = ccall cas(mv + SIZEOF_StgHeader + OFFSET_StgMutVar_var, x, y);
>>   if (h != x) { goto retry; }
>>
>> cas() is defined in includes/stg/SMP.h like this:
>>
>> EXTERN_INLINE StgWord
>> cas(StgVolatilePtr p, StgWord o, StgWord n)
>> {
>> return __sync_val_compare_and_swap(p, o, n);
>> }
>>
>> I think this is a function we want to make sure to inline everywhere,
>> right?
>> It's compiled to a single instruction on my x86_64 Linux laptop.
>>
>> >>> disassemble cas
>> Dump of assembler code for function cas:
>>0x00027240 <+0>: mov%rsi,%rax
>>0x00027243 <+3>: lock cmpxchg %rdx,(%rdi)
>>0x00027248 <+8>: retq
>> End of assembler dump.
>>
>> But it seems like it's not really inlined in Cmm functions:
>>
>> >>> disassemble stg_atomicModifyMutVarzh
>> Dump of assembler code for function stg_atomicModifyMutVarzh:
>>...
>>0x00046738 <+120>:   callq  0x27240 
>>...
>> End of assembler dump.
>>
>> I guess the problem is that we can't inline C code in Cmm, but I was
>> wondering
>> if this is important enough to try to fix maybe. Has anyone here looked at
>> some
>> profiling info to see how much time spent on this cas() call when threads
>> are
>> blocked in `atomicModifyIORef` etc?


atomicModifyMutVar#: cas() is not inlined

2016-07-27 Thread Ömer Sinan Ağacan
This is from definition of stg_atomicModifyMutVarzh(): (for threaded runtime)

retry:
  x = StgMutVar_var(mv);
  StgThunk_payload(z,1) = x;
  (h) = ccall cas(mv + SIZEOF_StgHeader + OFFSET_StgMutVar_var, x, y);
  if (h != x) { goto retry; }

cas() is defined in includes/stg/SMP.h like this:

EXTERN_INLINE StgWord
cas(StgVolatilePtr p, StgWord o, StgWord n)
{
return __sync_val_compare_and_swap(p, o, n);
}

I think this is a function we want to make sure to inline everywhere, right?
It's compiled to a single instruction on my x86_64 Linux laptop.

>>> disassemble cas
Dump of assembler code for function cas:
   0x00027240 <+0>: mov%rsi,%rax
   0x00027243 <+3>: lock cmpxchg %rdx,(%rdi)
   0x00027248 <+8>: retq
End of assembler dump.

But it seems like it's not really inlined in Cmm functions:

>>> disassemble stg_atomicModifyMutVarzh
Dump of assembler code for function stg_atomicModifyMutVarzh:
   ...
   0x00046738 <+120>:   callq  0x27240 
   ...
End of assembler dump.

I guess the problem is that we can't inline C code in Cmm, but I was wondering
if this is important enough to try to fix. Has anyone here looked at profiling
info to see how much time is spent on this cas() call when threads are blocked
in `atomicModifyIORef` etc.?


Supporting unboxed tuples in the bytecode compiler

2016-07-25 Thread Ömer Sinan Ağacan
Simon,

I was looking at the bytecode compiler to understand what's needed to support
unboxed tuples. It seems like if we generate bytecode after unarise it should be
very easy to support unboxed tuples, because after unarise we don't have any
unboxed tuple binders (all binders have UnaryType). So only places we see
unboxed tuples are:

- Return positions. We just push contents of the tuple to the stack.

- Case alternatives. The case expression in this case has to have this form:

case e1 of
  (# bndr1, bndr2, ..., bndrN #) -> RHS

  All binders will have unary types again. We just bind ids in the
  environment to their stack locations and compile RHS.

I think that's it. We also get unboxed sums support for free when we do this
after unarise.

What do you think about compiling to bytecode from STG? Have you considered that
before? Would that be a problem for GHCi's debugger or any other features?


Note [Api annotations]

2016-07-20 Thread Ömer Sinan Ağacan
I see some weird comments like

--  - 'ApiAnnotation.AnnKeywordId' : 'ApiAnnotation.AnnOpen',
--  'ApiAnnotation.AnnVbar','ApiAnnotation.AnnComma',
--  'ApiAnnotation.AnnClose'

-- For details on above see note [Api annotations] in ApiAnnotation

in some files, but Note [Api annotations] in compiler/parser/ApiAnnotation.hs
doesn't say anything about those comments. Can someone update the note to
explain what those comments are for?


Re: first unboxed sums patch is ready for reviews

2016-07-19 Thread Ömer Sinan Ağacan
I just got Simon's approval and I'm going to push it tomorrow (need to
add some more documentation) if no one asks for more things.

2016-07-09 12:55 GMT+00:00 Ömer Sinan Ağacan <omeraga...@gmail.com>:
> Hi all,
>
> I'm almost done with the unboxed sums patch and I'd like to get some reviews 
> at
> this point.
>
> https://phabricator.haskell.org/D2259
>
> Two key files in the patch are UnariseStg.hs and RepType.hs.
>
> For the example programs see files in testsuite/tests/unboxedsums/
>
> In addition to any comments about the code and documentation, it'd be
> appreciated if you tell me about some potential uses of unboxed sums, example
> programs, edge cases etc. so that I can test it a bit more and make sure the
> generated code is good.
>
> Thanks,
>
> Omer


Re: Parser changes for supporting top-level SCC annotations

2016-07-19 Thread Ömer Sinan Ağacan
2016-07-19 9:57 GMT+00:00 Simon Peyton Jones :
> Is there a ticket?  A wiki page with a specification?

I updated the user manual. There's no wiki page; the patch just adds support
for SCC annotations at the top level.


Re: Parser changes for supporting top-level SCC annotations

2016-07-19 Thread Ömer Sinan Ağacan
I managed to do this without introducing any new pragmas. I added a new
production that doesn't look for SCC annotations, for top-level expressions. I
then used it in decl_no_th and topdecl.

I'm not sure if I broke anything though. I'll validate in slow mode now.

Patch is here: http://phabricator.haskell.org/D2407

2016-06-01 12:55 GMT+00:00 Ömer Sinan Ağacan <omeraga...@gmail.com>:
> I was actually trying to avoid that, thinking that it'd be best if SCC 
> uniformly
> worked for top-levels and expressions. But then this new form:
>
> {-# SCC f "f_scc" #-}
>
> Would only work for toplevel SCCs.. So maybe it's OK to introduce a new pragma
> here.
>
> 2016-06-01 8:13 GMT-04:00 Richard Eisenberg <e...@cis.upenn.edu>:
>> What about just using a new pragma?
>>
>>> {-# SCC_FUNCTION f "f_scc" #-}
>>> f True = ...
>>> f False = ...
>>
>> The pragma takes the name of the function (a single identifier) and the name 
>> of the SCC. If you wish both to have the same name, you can leave off the 
>> SCC name.
>>
>> It seems worth it to me to introduce a new pragma here.
>>
>> Richard
>>
>> On May 30, 2016, at 3:14 PM, Ömer Sinan Ağacan <omeraga...@gmail.com> wrote:
>>
>>> I'm trying to support SCCs at the top-level. The implementation should be
>>> trivial except the parsing part turned out to be tricky. Since expressions 
>>> can
>>> appear at the top-level, after a {-# SCC ... #-} parser can't decide 
>>> whether to
>>> reduce the token in `sigdecl` to generate a `(LHsDecl (Sig (SCCSig ...)))` 
>>> or to
>>> keep shifting to parse an expression. As shifting is the default behavior 
>>> when a
>>> shift/reduce conflict happens, it's always trying to parse an expression, 
>>> which
>>> is always the wrong thing to do.
>>>
>>> Does anyone have any ideas on how to handle this?
>>>
>>> Motivation: Not having SCCs at the top level is becoming annoying real 
>>> quick.
>>> For simplest cases, it's possible to do this transformation:
>>>
>>>f x y = ...
>>>=>
>>>f = {-# SCC f #-} \x y -> ...
>>>
>>> However, it doesn't work when there's a `where` clause:
>>>
>>>f x y = 
>>>  where t = ...
>>>=>
>>>f = {-# SCC f #-} \x y -> 
>>>  where t = ...
>>>
>>> Or when we have a "equation style" definition:
>>>
>>>f (C1 ...) = ...
>>>f (C2 ...) = ...
>>>f (C3 ...) = ...
>>>...
>>>
>>> (the usual solution is to rename `f` to `f'` and define a new `f` with an `SCC`)
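Spelled out, that rename workaround looks like this (a sketch: `C`, `f'`, and the equations are made-up names for illustration, but the pattern compiles and runs with or without profiling, since SCC pragmas are ignored when not profiling):

```haskell
module Main where

data C = C1 Int | C2 Bool | C3

-- The original equation-style definition, renamed to f':
f' :: C -> Int
f' (C1 n) = n
f' (C2 b) = if b then 1 else 0
f' C3     = -1

-- A new f that carries the cost-centre annotation and delegates to f':
f :: C -> Int
f = {-# SCC f #-} f'

main :: IO ()
main = print (map f [C1 5, C2 True, C3])  -- prints [5,1,-1]
```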
>>> ___
>>> ghc-devs mailing list
>>> ghc-devs@haskell.org
>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>


Slow validate currently fails

2016-07-15 Thread Ömer Sinan Ağacan
Hi all,

Just wanted to say that HEAD (37aeff6) doesn't currently pass validate (in slow
mode). See details below.


 STAGE 2 TESTS 

Unexpected results from:
TEST="haddock.Cabal hpc_fork T4114c T4114d dynamic-paper"

SUMMARY for test run started at Fri Jul 15 11:19:15 2016 UTC
 0:43:36 spent to go through
    5264 total tests, which gave rise to
   21166 test cases, of which
    3817 were skipped

     197 had missing libraries
   16928 expected passes
     219 expected failures

       0 caused framework failures
       0 unexpected passes
       4 unexpected failures
       1 unexpected stat failures

Unexpected failures:
   /tmp/ghctest-7VEJbb/test   spaces/./dependent/should_compile/dynamic-paper.run  dynamic-paper [exit code non-0] (profasm)
   /tmp/ghctest-7VEJbb/test   spaces/./driver/T4114c.run  T4114c [bad exit code] (ghci)
   /tmp/ghctest-7VEJbb/test   spaces/./driver/T4114d.run  T4114d [bad exit code] (ghci)
   /tmp/ghctest-7VEJbb/test   spaces/../../libraries/hpc/tests/fork/hpc_fork.run  hpc_fork [bad heap profile] (profasm)

Unexpected stat failures:
   /tmp/ghctest-7VEJbb/test   spaces/./perf/haddock/haddock.Cabal.run  haddock.Cabal [stat not good enough] (normal)


It validates in fast mode though (haven't tried the default mode).


testsuite: Test named X but want to use Y.hs as source

2016-07-10 Thread Ömer Sinan Ağacan
I have some number of test programs that I compile and run as usual. I also
want to run them using GHCi, with -fobject-code. So I tried this:

def just_ghci( name, opts ):
    opts.only_ways = ['ghci']

test('unboxedsums1.ghci', just_ghci, compile_and_run, ['-fobject-code'])

Now, I don't have a file named `unboxedsums1.ghci.hs`, I want to use
`unboxedsums1.hs` and I already have a test named `unboxedsums1`.

Any ideas how to do this?

Thanks


first unboxed sums patch is ready for reviews

2016-07-09 Thread Ömer Sinan Ağacan
Hi all,

I'm almost done with the unboxed sums patch and I'd like to get some reviews at
this point.

https://phabricator.haskell.org/D2259

Two key files in the patch are UnariseStg.hs and RepType.hs.

For the example programs see files in testsuite/tests/unboxedsums/

In addition to any comments about the code and documentation, it'd be
appreciated if you could tell me about potential uses of unboxed sums, example
programs, edge cases, etc. so that I can test it a bit more and make sure the
generated code is good.
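For instance, one small exercise of the new syntax is an unboxed Either-like result (a hypothetical usage sketch of the `(# | #)` syntax from the patch; `safeDiv` and `showRes` are made-up names):

```haskell
{-# LANGUAGE UnboxedSums, UnboxedTuples #-}
module Main where

-- Division that returns an unboxed sum instead of allocating a Maybe:
safeDiv :: Int -> Int -> (# Int | () #)
safeDiv _ 0 = (# | () #)
safeDiv x y = (# x `div` y | #)

-- Scrutinise the unboxed sum; the sum itself never hits the heap.
showRes :: (# Int | () #) -> String
showRes (# n | #) = "Ok " ++ show n
showRes (# | _ #) = "DivByZero"

main :: IO ()
main = do
  putStrLn (showRes (safeDiv 10 2))  -- prints "Ok 5"
  putStrLn (showRes (safeDiv 1 0))   -- prints "DivByZero"
```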

Thanks,

Omer


Re: Testsuite cleaning

2016-06-24 Thread Ömer Sinan Ağacan
I also realized this after a rebase I did yesterday. Should be a recent thing.

2016-06-24 7:58 GMT+00:00 Simon Peyton Jones via ghc-devs:
> Thomas
>
> During debugging I often compile a single test program
>
> ghc -c T1969.hs
>
> But the new testsuite setup doesn’t remove .hi and .o files before running a
> test, so
>
> make TEST=T1969
>
> says
>
> bytes allocated value is too low:
>
> …
>
> Deviation   T1969(normal) bytes allocated: -95.2 %
>
> Reason?  Compilation was not required!
>
> Non-perf tests fail in the same way
>
> +compilation IS NOT required
>
> *** unexpected failure for T11480b(normal)
>
> I’m sure this didn’t use to happen.
>
> It’s not fatal, because one can manually remove those .o files, but it’s a bit
> of a nuisance.  Might it be easy to restore the old behaviour?
>
> Thanks
>
> Simon
>
>
>
>


Does anyone know any easy-to-run compile-time benchmark suites?

2016-06-23 Thread Ömer Sinan Ağacan
Hi all,

I was wondering if anyone has or knows easy-to-run compile-time benchmarks? I'm
looking for something like nofib -- ideally after a fresh build I should be
able to just run `make` and get some numbers (mainly allocations) back.

