Re: How to use the compact regions API to write a compact to a file?

2020-05-22 Thread Shao, Cheng
Hi Matthew,

It's possible to use Data.Compact.Serialize to write a compact to a
file or read it back. Serializing directly via ByteStrings has also
been discussed before; see the links below. Hope this helps!

https://hackage.haskell.org/package/compact
https://github.com/ezyang/compact/issues/3
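
For reference, a minimal sketch of pulling the raw block contents out of a
compact region with ghc-compact (module and field names as documented for
ghc-compact-0.1.0.0; note these ByteStrings alone are not enough to reimport
the region — the block addresses and root pointer inside the SerializedCompact
are also needed, which is exactly the gap discussed in the issue above):

```haskell
import qualified Data.ByteString as BS
import Foreign.Ptr (castPtr)
import GHC.Compact (compact)
import GHC.Compact.Serialized (SerializedCompact (..), withSerializedCompact)

main :: IO ()
main = do
  -- Put a value into a compact region.
  c <- compact ([1 .. 10] :: [Int])
  -- Copy each block of the region into a ByteString.
  blocks <- withSerializedCompact c $ \s ->
    mapM (\(ptr, len) -> BS.packCStringLen (castPtr ptr, fromIntegral len))
         (serializedCompactBlockList s)
  -- The region occupies at least one non-empty block.
  print (not (null blocks) && all ((> 0) . BS.length) blocks)
```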

On Fri, May 22, 2020 at 1:32 PM Matthew Pickering wrote:
>
> Dear devs,
>
> I have been under the impression for a while that you can write a
> compact region to a file and the functions in GHC.Compact.Serialized
> also seem to suggest this is possible.
>
> https://hackage.haskell.org/package/ghc-compact-0.1.0.0/docs/GHC-Compact-Serialized.html
>
> It seems like there are some functions missing from the API. There is
> "importCompactByteStrings"
> but no function in the API which produces ByteStrings. So in order to
> use this function you have to convert a SerializedCompact into a
> ByteString, but in what specific way?
>
> Has anyone got any code where they have tried to do this?
>
> Cheers,
>
> Matt
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Blocking MVar# primops not performing stack checks?

2020-02-26 Thread Shao, Cheng
Hi all,

When an MVar# primop blocks, it jumps to a function in
HeapStackCheck.cmm which pushes a RET_SMALL stack frame before
returning to the scheduler (e.g. the takeMVar# primop jumps to
stg_block_takemvar for stack adjustment). But these functions directly
bump Sp without checking for possible stack overflow; I wonder whether
this is a bug?
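
For what it's worth, the blocking path in question is easy to trigger from
Haskell: a takeMVar on an empty MVar is what ends up in stg_block_takemvar
(a sketch for illustration only — the stack-check concern itself is only
visible at the Cmm/RTS level):

```haskell
import Control.Concurrent

main :: IO ()
main = do
  mv <- newEmptyMVar
  -- The forked thread fills the MVar after the main thread blocks.
  _ <- forkIO $ putMVar mv "woken"
  -- The main thread's takeMVar blocks: the RTS pushes a RET_SMALL
  -- frame via stg_block_takemvar and returns to the scheduler.
  takeMVar mv >>= putStrLn
```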

Cheers,
Cheng


Re: How to turn LHExpr GhcPs into CoreExpr

2020-01-23 Thread Shao, Cheng
How about using `hscCompileCoreExprHook` to intercept the `CoreExpr`
from the ghci pipeline? There exists a GHC API to evaluate a String to
a ForeignHValue, IIRC; we are not interested in the final ForeignHValue
in this case, we just want the CoreExpr, and the logic of generating
and linking BCOs can be discarded.

Cheers,
Cheng

On Thu, Jan 23, 2020 at 1:55 PM Ben Gamari  wrote:
>
> It is slightly disheartening that this relatively simple use-case requires 
> reaching so deeply into the typechecker.
>
> If there really exists no easier interface then perhaps we should consider 
> adopting your elaborateExpr as part of the GHC API.
>
> Cheers,
>
> - Ben
>
> On January 23, 2020 4:04:03 AM EST, Richard Eisenberg  
> wrote:
>>
>> I don't know the exact semantics of the interactive context, etc., but that 
>> looks plausible. It won't give the *wrong* answer. :)
>>
>> Thanks for sharing!
>> Richard
>>
>> On Jan 23, 2020, at 4:52 AM, Yiyun Liu  wrote:
>>
>> Thank you all for your help! It turns out that I was missing the constraint 
>> solving and zonking step by desugaring the result of tcInferSigma directly.
>>
>> I have the implementation of the function here. Not sure if it's 100% 
>> correct but at least it works for all the examples I can come up with so far.
>>
>> - Yiyun
>>
>> On 1/22/20 7:09 AM, Andreas Klebinger wrote:
>>
>> I tried this for fun a while ago and ran into the issue of needing to 
>> provide a type environment containing Prelude and so on.
>> I gave up on that when some of the calls failed because I must have missed
>> setting up some implicit state properly.
>> I didn't have an actual use case (only curiosity) so I didn't look further 
>> into it. If you do find a way please let me know.
>>
>> I would also support adding any missing functions to GHC-the-library to make 
>> this possible if any turn out to be required.
>>
>> As an alternative you could also use the GHCi approach of using a fake 
>> Module. This would allow you to copy whatever GHCi is doing.
>> But I expect that to be slower if you expect to process many such strings.
>>
>> Richard Eisenberg schrieb am 22.01.2020 um 10:36:
>>
>> You'll need to run the expression through the whole pipeline.
>>
>> 1. Parsing
>> 2. Renaming
>> 3. Type-checking
>>   3a. Constraint generation
>>   3b. Constraint solving
>>   3c. Zonking
>> 4. Desugaring
>>
>>
>>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.


Cmm code of `id` function referring to `breakpoint`?

2019-02-05 Thread Shao, Cheng
Hi devs,

I just found that the Cmm code of `GHC.Base.id` refers to `breakpoint`
in the same module; however, in the Haskell source of `GHC.Base`, the
definitions of `id` and `breakpoint` are totally unrelated:

```
id :: a -> a
id x = x

breakpoint :: a -> a
breakpoint r = r
```

And here's the pretty-printed Cmm code:

```
base_GHCziBase_id_entry() //  [R2]
  { []
  }
  {offset
chwa: // global
R2 = R2;
call base_GHCziBase_breakpoint_entry(R2) args: 8, res: 0, upd: 8;
  }
base_GHCziBase_breakpoint_entry() //  [R2]
  { []
  }
  {offset
chvW: // global
R1 = R2;
call stg_ap_0_fast(R1) args: 8, res: 0, upd: 8;
  }
```

This looks suspicious. I'm curious whether this is intended behavior of GHC.

Regards,
Shao Cheng


Re: MutVar# and GC

2019-01-17 Thread Shao, Cheng
Hi,

I believe it's mentioned here:

https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/Storage/GC/RememberedSets
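
As a rough illustration of when that write barrier fires (the comments below
are an assumption based on the wiki page — dirty_MUT_VAR is an RTS detail
that isn't directly observable from Haskell):

```haskell
import Data.IORef
import System.Mem (performMajorGC)

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  -- After a major GC, the IORef has been promoted to an older generation.
  performMajorGC
  -- This write makes an old object point to a freshly allocated value, so
  -- the RTS write barrier adds the MutVar to the generation's mutable list.
  writeIORef ref (1 + 41)
  readIORef ref >>= print
```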

Regards,
Shao Cheng

On Fri, Jan 18, 2019, 12:34 PM chessai .  wrote:

> Ryan,
>
> That makes perfect sense, thanks. Is that documented explicitly anywhere?
> If not, I'd like to add the documentation to any place relevant.
>
> Thanks
>
> On Thu, Jan 17, 2019, 8:19 PM Ryan Yates 
>> Hi,
>>
>> Because GHC's GC is generational it needs a way to handle heap objects
>> from older generations that point into younger generations.  This only
>> happens when an older object is mutated to point to a younger object.  GHC
>> solves this by invoking the GC write barrier (not to be confused with write
>> barriers for memory synchronization) `dirty_MUT_VAR`.  This will add that
>> mutable object to a mutable list that will be traversed in minor GCs along
>> with young generation roots.  Additionally the write barrier will mark the
>> heap object as "dirty" to avoid adding it to the list more than once.
>>
>> Ryan
>>
>> On Thu, Jan 17, 2019 at 4:29 PM chessai .  wrote:
>>
>>> Devs,
>>>
>>> I've heard from a few friends that MutVars, TVars, etc. are more
>>> challenging for the garbage collector. I'm writing to ask if someone can
>>> answer: 1. Is this true, and 2: Why? I can't seem to find anything like a
>>> writeup or documentation that mentions this. The HeapObjects trac page also
>>> mentions nothing about these supposed difficulties that GC faces with
>>> mutable heap objects.
>>>
>>> Thanks


Re: GHC (API?) question: GHC Core for Base libraries

2018-12-04 Thread Shao, Cheng
Indeed, the boot.sh script is likely what you are looking for. To
compile `base` and retrieve Core for it, you just need to set up an
empty package database, use the Setup.hs script in base to compile it,
and load your plugin via "--ghc-option=.." provided to `Setup
configure`.
On Wed, Dec 5, 2018 at 2:37 AM Bill Hallahan  wrote:
>
> Joachim Breitner's veggies(https://github.com/nomeata/veggies) project is a 
> good example of using a vanilla ghc installation to compile standard 
> libraries like base.
>
> Thanks Cheng, this looks interesting and helpful!  I'm still trying to 
> understand it fully, but it seems like the important pieces for building base 
> are in the boot and Setup scripts?
>
> The problem here is that you're using a stage 1 build, and stage 1 lacks 
> support for the bytecode backend used by TH, plugins, ghci, etc. A base build 
> with a stage 2 compiler should work.
>
> Thanks Brandon.  Is there a recommended/documented way to build base with a 
> stage 2 compiler?  I haven't been able to find anything about this on the 
> wiki.
>
> On Dec 3, 2018, at 11:10 PM, Brandon Allbery  wrote:
>
> The problem here is that you're using a stage 1 build, and stage 1 lacks 
> support for the bytecode backend used by TH, plugins, ghci, etc. A base build 
> with a stage 2 compiler should work.
>
> On Mon, Dec 3, 2018 at 9:11 PM Bill Hallahan  
> wrote:
>>
>> Hi,
>>
>> I'm writing a program analyzer that operates on GHC Core.  Currently, I'm 
>> using the GHC API to get Core from .hs files.  I'd like to be able to run 
>> this analysis on the standard libraries that come with GHC, which requires 
>> getting those as Core.
>>
>> Unfortunately, the build process for these libraries is not entirely 
>> straightforward, and relies on a make script.  I eventually came up with the 
>> plan of writing a GHC plugin, which, rather than performing any 
>> optimizations, would simply run the analysis, and then print the results out 
>> to a file.  I was able to write the plugin successfully, and test it on 
>> several files that were not from the base library.
>>
>> Then, I turned to modifying the GhcLibHcOpts flag in mk/build.mk, so that the 
>> make script would call the plugin.  I ended up with the following:
>>
>> GhcLibHcOpts = -package-db /usr/local/lib/ghc-8.0.2/package.conf.d 
>> -package-db /Users/BillHallahan/.ghc/x86_64-darwin-8.0.2/package.conf.d 
>> -package hplugin -fplugin HPlugin.Plugin -v
>>
>> The two package databases are to get to (1) the GHC API and (2) the plugin 
>> ("hplugin") itself.  With this I get an error message, which I have not been 
>> able to find a way to resolve:
>>
>> "inplace/bin/ghc-stage1" -hisuf hi -osuf  o -hcsuf hc -static  -H32m -O 
>> -Wall  -this-unit-id ghc-prim-0.5.0.0 -hide-all-packages -i 
>> -ilibraries/ghc-prim/. -ilibraries/ghc-prim/dist-install/build 
>> -ilibraries/ghc-prim/dist-install/build/autogen 
>> -Ilibraries/ghc-prim/dist-install/build 
>> -Ilibraries/ghc-prim/dist-install/build/autogen -Ilibraries/ghc-prim/.
>> -optP-include 
>> -optPlibraries/ghc-prim/dist-install/build/autogen/cabal_macros.h 
>> -package-id rts -this-unit-id ghc-prim -XHaskell2010 -package-db 
>> /usr/local/lib/ghc-8.0.2/package.conf.d -package-db 
>> /Users/BillHallahan/.ghc/x86_64-darwin-8.0.2/package.conf.d -package hplugin 
>> -fplugin HPlugin.Plugin  -no-user-package-db -rtsopts -Wno-trustworthy-safe 
>> -Wno-deprecated-flags -Wnoncanonical-monad-instances  -odir 
>> libraries/ghc-prim/dist-install/build -hidir 
>> libraries/ghc-prim/dist-install/build -stubdir 
>> libraries/ghc-prim/dist-install/build -split-objs  -dynamic-too -c 
>> libraries/ghc-prim/./GHC/Types.hs -o 
>> libraries/ghc-prim/dist-install/build/GHC/Types.o -dyno 
>> libraries/ghc-prim/dist-install/build/GHC/Types.dyn_o
>> : not built for interactive use - can't load plugins 
>> (HPlugin.Plugin)
>> make[1]: *** [libraries/ghc-prim/dist-install/build/GHC/Types.o] Error 1
>> make: *** [all] Error 2
>>
>> So I'm now wondering (an answer to either of these two questions would be 
>> helpful):
>> (1) Is this a viable path?  That is, is it possible to use a plugin when 
>> building Base?  If so, does anyone know what I might be doing wrong/what 
>> could be causing this error message?
>> (2) Is there some other better/easier way I could get Core representations 
>> of the standard libraries?  I guess, in theory, it must be possible to 
>> compile the standard libraries with the GHC API, but I have no idea 
>> how/where to look to figure out how?
>>
>> Thanks,
>> Bill
>
>
>
> --
> brandon s allbery kf8nh
> allber...@gmail.com
>
>

Re: GHC (API?) question: GHC Core for Base libraries

2018-12-03 Thread Shao, Cheng
Hi,

Joachim Breitner's veggies(https://github.com/nomeata/veggies) project is a
good example of using a vanilla ghc installation to compile standard
libraries like base.

On Tue, Dec 4, 2018, 10:11 AM Bill Hallahan wrote:

> Hi,
>
> I'm writing a program analyzer that operates on GHC Core.  Currently, I'm
> using the GHC API to get Core from .hs files.  I'd like to be able to run
> this analysis on the standard libraries that come with GHC, which requires
> getting those as Core.
>
> Unfortunately, the build process for these libraries is not entirely
> straightforward, and relies on a make script.  I eventually came up with
> the plan of writing a GHC plugin, which, rather than performing any
> optimizations, would simply run the analysis, and then print the results
> out to a file.  I was able to write the plugin successfully, and test it on
> several files that were *not* from the base library.
>
> Then, I turned to modifying the GhcLibHcOpts flag in mk/build.mk, so that
> the make script would call the plugin.  I ended up with the following:
>
> GhcLibHcOpts = -package-db /usr/local/lib/ghc-8.0.2/package.conf.d
> -package-db /Users/BillHallahan/.ghc/x86_64-darwin-8.0.2/package.conf.d
> -package hplugin -fplugin HPlugin.Plugin -v
>
> The two package databases are to get to (1) the GHC API and (2) the plugin
> ("hplugin") itself.  With this I get an error message, which I have not
> been able to find a way to resolve:
>
> "inplace/bin/ghc-stage1" -hisuf hi -osuf  o -hcsuf hc -static  -H32m -O
> -Wall  -this-unit-id ghc-prim-0.5.0.0 -hide-all-packages -i
> -ilibraries/ghc-prim/. -ilibraries/ghc-prim/dist-install/build
> -ilibraries/ghc-prim/dist-install/build/autogen
> -Ilibraries/ghc-prim/dist-install/build
> -Ilibraries/ghc-prim/dist-install/build/autogen -Ilibraries/ghc-prim/.
> -optP-include
> -optPlibraries/ghc-prim/dist-install/build/autogen/cabal_macros.h
> -package-id rts -this-unit-id ghc-prim -XHaskell2010 -package-db
> /usr/local/lib/ghc-8.0.2/package.conf.d -package-db
> /Users/BillHallahan/.ghc/x86_64-darwin-8.0.2/package.conf.d -package
> hplugin -fplugin HPlugin.Plugin  -no-user-package-db -rtsopts
> -Wno-trustworthy-safe -Wno-deprecated-flags
> -Wnoncanonical-monad-instances  -odir libraries/ghc-prim/dist-install/build
> -hidir libraries/ghc-prim/dist-install/build -stubdir
> libraries/ghc-prim/dist-install/build -split-objs  -dynamic-too -c
> libraries/ghc-prim/./GHC/Types.hs -o
> libraries/ghc-prim/dist-install/build/GHC/Types.o -dyno
> libraries/ghc-prim/dist-install/build/GHC/Types.dyn_o
> : not built for interactive use - can't load plugins
> (HPlugin.Plugin)
> make[1]: *** [libraries/ghc-prim/dist-install/build/GHC/Types.o] Error 1
> make: *** [all] Error 2
>
> So I'm now wondering (an answer to either of these two questions would be
> helpful):
> (1) Is this a viable path?  That is, is it possible to use a plugin when
> building Base?  If so, does anyone know what I might be doing wrong/what
> could be causing this error message?
> (2) Is there some other better/easier way I could get Core representations
> of the standard libraries?  I guess, in theory, it must be possible to
> compile the standard libraries with the GHC API, but I have no idea
> how/where to look to figure out how?
>
> Thanks,
> Bill


Re: Does it sound a good idea to implement "backend plugins"?

2018-10-04 Thread Shao, Cheng
> A long time ago, I tried to inject plugin logic to allow some control
> over the driver pipeline (phase ordering) and to hook various
> code-gen-related functions.
>
> See https://phabricator.haskell.org/D535

Cool! I haven't thoroughly read the history of that diff, but allowing
manipulation of a Hooks via a Plugin seems overkill in this case, and
even if one can do so, it still doesn't lead to the backend IR types;
one would need to use runPhaseHook and modify the behavior after a
CgGuts is generated, which unfortunately leads to quite some
boilerplate code.

> At that time I ran into issues that might simply not exist with plugins 
> anymore today, but I haven’t looked.

Interesting. I'll make sure to consult you in case I'm bitten by some
hidden issues when I actually implement it :)

> The whole design wasn't quite right and injected everything into the DynFlags.
> Also, ghc wanted to be able to compile the plugin on the fly, but I needed
> the plugin to be loaded very early during the startup phase to exert enough
> control over the rest of the pipeline through the plugin.

Well, in the case of backend plugins, the plugin isn't supposed to be
a home plugin compiled and used on the fly. A typical use case would be
compiling/installing the plugin into a standalone pkgdb, then using it
to compile other packages.

>
> On 5 Oct 2018, at 1:52 AM, Shao, Cheng  wrote:
>
> Adding "pluggable backends" to spin up new targets seems to require quite a 
> bit of additional infrastructure for initialising a library directory and 
> package database. But there are probably more specific use cases that need 
> inspecting/modifying STG or Cmm where plugins would already be useful in 
> practice.
>
>
> I think setting up a new global libdir/pkgdb is beyond the scope of
> backend plugins. The user shall implement his/her own boot script to
> configure for the new architecture, generate relevant headers, run
> Cabal's Setup program to launch GHC with the plugin loaded.
>
> Hooks (or rather their locations in the pipeline) are rather ad hoc by 
> nature, but for Asterius a hook that takes Cmm and takes over from there 
> seems like a reasonable approach given the current state of things. I think 
> the Cmm hook you implemented (or something similar) would be perfectly 
> acceptable to use for now.
>
>
> For the use case of asterius itself, indeed Hooks already fit the use
> case for now. But since we seek to upstream our newly added features
> in our ghc fork back to ghc hq, we should upstream those changes early
> and make them more principled. Compared to Hooks, I prefer to move to
> Plugins entirely since:
>
> * Plugins are more composable, you can load multiple plugins in one
> ghc invocation. Hooks are not.
> * If I implement the same mechanisms in Plugins, this can be
> beneficial to other projects. Currently, in asterius, everything works
> via a pile of hacks upon hacks in ghc-toolkit, and it's not good for
> reuse.
> * The newly added backend plugins shouldn't have visible
> correctness/performance impact if they're not used, and it's just a
> few local modifications in the ghc codebase.
>
> On Thu, Oct 4, 2018 at 3:56 PM Shao, Cheng  wrote:
>>
>> Hi all,
>>
>> I'm thinking of adding "backend plugins" in the current Plugins
>> mechanism which allows one to inspect/modify the IRs post simplifier
>> pass (STG/Cmm), similar to the recently added source plugins for HsSyn
>> IRs. This can be useful for anyone creating a custom GHC backend to
>> target an experimental platform (e.g. the Asterius compiler which
>> targets WebAssembly), and previously in order to retrieve those IRs
>> from the regular pipeline, we need to use Hooks which is somewhat
>> hacky.
>>
>> Does this sound a good idea to you? If so, I can open a trac ticket
>> and a Phab diff for this feature.
>>
>> Best,
>> Shao Cheng


Re: Does it sound a good idea to implement "backend plugins"?

2018-10-04 Thread Shao, Cheng
> Adding "pluggable backends" to spin up new targets seems to require quite a 
> bit of additional infrastructure for initialising a library directory and 
> package database. But there are probably more specific use cases that need 
> inspecting/modifying STG or Cmm where plugins would already be useful in 
> practice.

I think setting up a new global libdir/pkgdb is beyond the scope of
backend plugins. The user shall implement his/her own boot script to
configure for the new architecture, generate relevant headers, run
Cabal's Setup program to launch GHC with the plugin loaded.

> Hooks (or rather their locations in the pipeline) are rather ad hoc by 
> nature, but for Asterius a hook that takes Cmm and takes over from there 
> seems like a reasonable approach given the current state of things. I think 
> the Cmm hook you implemented (or something similar) would be perfectly 
> acceptable to use for now.

For the use case of asterius itself, Hooks indeed fit for now. But
since we seek to upstream the newly added features in our ghc fork
back to GHC HQ, we should upstream those changes early and make them
more principled. Compared to Hooks, I prefer to move to Plugins
entirely since:

* Plugins are more composable: you can load multiple plugins in one
ghc invocation; Hooks are not.
* If I implement the same mechanisms in Plugins, this can be
beneficial to other projects. Currently, in asterius, everything works
via a pile of hacks upon hacks in ghc-toolkit, and it's not good for
reuse.
* The newly added backend plugins shouldn't have any visible
correctness/performance impact if they're not used, and it's just a
few local modifications in the ghc codebase.

> On Thu, Oct 4, 2018 at 3:56 PM Shao, Cheng  wrote:
>>
>> Hi all,
>>
>> I'm thinking of adding "backend plugins" in the current Plugins
>> mechanism which allows one to inspect/modify the IRs post simplifier
>> pass (STG/Cmm), similar to the recently added source plugins for HsSyn
>> IRs. This can be useful for anyone creating a custom GHC backend to
>> target an experimental platform (e.g. the Asterius compiler which
>> targets WebAssembly), and previously in order to retrieve those IRs
>> from the regular pipeline, we need to use Hooks which is somewhat
>> hacky.
>>
>> Does this sound a good idea to you? If so, I can open a trac ticket
>> and a Phab diff for this feature.
>>
>> Best,
>> Shao Cheng


Does it sound a good idea to implement "backend plugins"?

2018-10-04 Thread Shao, Cheng
Hi all,

I'm thinking of adding "backend plugins" to the current Plugins
mechanism, which would allow one to inspect/modify the IRs after the
simplifier pass (STG/Cmm), similar to the recently added source
plugins for HsSyn IRs. This can be useful for anyone creating a custom
GHC backend to target an experimental platform (e.g. the Asterius
compiler, which targets WebAssembly); previously, in order to retrieve
those IRs from the regular pipeline, we needed to use Hooks, which is
somewhat hacky.

Does this sound like a good idea to you? If so, I can open a Trac
ticket and a Phab diff for this feature.

Best,
Shao Cheng


Re: cc1plus.exe of bundled mingw-w64 segfaults

2018-09-02 Thread Shao, Cheng
I managed to build a ghc bindist which bundles gcc 8.2.0, and
cc1plus.exe works as intended now. However, I'm getting linker errors
like:

```
ghc.EXE: unable to load package `ghc-prim-0.5.3'
ghc.EXE:  |
C:\Users\Think\AppData\Local\Programs\stack\x86_64-windows\ghc-8.7.20180902\mingw\x86_64-w64-mingw32\lib\libmingw32.a:
unknown symbol `__acrt_iob_func'
ghc.EXE:  |
C:\Users\Think\AppData\Local\Programs\stack\x86_64-windows\ghc-8.7.20180902\mingw\x86_64-w64-mingw32\lib\libmingwex.a:
unknown symbol `__mingw_raise_matherr'
ghc.EXE:  |
C:\Users\Think\AppData\Local\Programs\stack\x86_64-windows\ghc-8.7.20180902\lib\ghc-prim-0.5.3\HSghc-prim-0.5.3.o:
unknown symbol `exp'
```

I checked ghc-prim.cabal and saw that on Windows ghc-prim already
links with libmingw32.a/libmingwex.a, which is supposed to provide
`__mingw_raise_matherr`, etc. Is there any other library to link, or
has something else gone wrong? Thank you.
On Sun, Sep 2, 2018 at 10:06 PM, Shao, Cheng  wrote:
> Hi folks,
>
> I'm building a Haskell/C++ hybrid project with a recent revision of
> ghc on Windows, and noticed that cc1plus.exe always segfaults. The
> same code builds fine on AppVeyor however and currently I have no clue
> why it ceases to work on my machine. Is anyone else experiencing a
> similar problem? I'm not sure if it's worth a trac ticket.
>
> Using cc1plus.exe from a newer version of mingw-w64-x86_64-gcc works
> on my machine, but weird linker errors arise when I attempt to use the
> newer gcc toolchain to override the bundled one. Is there a way to
> manually specify mingw-w64 tarball version/urls when configuring ghc?
> I notice there's an "--enable-distro-toolchain" flag, does it work
> with building a bindist? Thank you.
>
> Regards,
> Shao Cheng


cc1plus.exe of bundled mingw-w64 segfaults

2018-09-02 Thread Shao, Cheng
Hi folks,

I'm building a Haskell/C++ hybrid project with a recent revision of
ghc on Windows, and noticed that cc1plus.exe always segfaults. The
same code builds fine on AppVeyor, however, and currently I have no
clue why it ceases to work on my machine. Is anyone else experiencing
a similar problem? I'm not sure if it's worth a Trac ticket.

Using cc1plus.exe from a newer version of mingw-w64-x86_64-gcc works
on my machine, but weird linker errors arise when I attempt to use the
newer gcc toolchain to override the bundled one. Is there a way to
manually specify the mingw-w64 tarball version/URLs when configuring ghc?
I notice there's an "--enable-distro-toolchain" flag; does it work
when building a bindist? Thank you.

Regards,
Shao Cheng


Re: Non-Reinstallable packages

2018-08-06 Thread Shao, Cheng
Hi,

IIRC those packages can be "reinstalled": just build and register them
into a fresh package database and add it to the pkgdb stack; ghc can
shadow the ones in the global pkgdb.

Regards,
Shao Cheng

On Tue, Aug 7, 2018 at 10:39 AM, Moritz Angermann wrote:
> Dear friends,
>
> we have a set of non-reinstallable packages with GHC; these
> include, IIRC, template-haskell and some others.  I've got
> a few questions concerning those:
>
> - do we have a complete up-to-date list of those?
> - why can't we reinstall them (let's assume we use the
>  identical version for now; and don't upgrade)
> - does this also hold if we essentially build a stage3
>  compiler with packages?
>
> Our usual build process is:
> 1. take a bootstrap compiler, which doesn't need to have
>   the same version as the final compiler.
> 2. build the libraries necessary to build the stage1 compiler
>   while ensuring we build some extra libraries as well,
>   so we don't have to rely on those shipped with the bootstrap
>   compiler.
> 3. use the stage1 compiler to build all libraries we want to ship
>   with the stage2 compiler; and build the stage2 compiler.
>
> Now I do understand that the stage1 compiler could potentially be
> tainted by the boot-strap compiler and as such yield different
> libraries compared to what the stage2 compiler would yield.
>
> Shouldn't rebuilding any library with the stage1 compiler yield the
> same libraries these days?
>
> If the boot-strap compiler is the same version as the one we build,
> shouldn't the stage2 compiler be capable of building good enough
> libraries as well so that we can reinstall them?
>
> What I ideally would like to have is a minimal compiler:
> ghc + rts; then keep building all the libraries from the ground up.
>
> A potential problem I see is that if we use dynamic libraries and
> get into TH, we could run into issues where we want to link libraries
> that are different to the ones that the ghc binary links against.
> Would this also hold if we used `-fexternal-interpreter` only?
>
> Cheers,
> Moritz
>
>


Re: Proposal: Professionalizing GHC Development

2018-04-01 Thread Shao, Cheng
Compiling GHC on a blockchain may not be economical, but running
GHC-compiled programs on a blockchain is definitely a great idea! I've even
come up with a paper title: A Secure Decentralized Transactional
Implementation of the Spineless Tagless G-machine, aka Haskoin!

Time to recruit a few engineers, write a white paper, and raise a
seed round :)

On Sun, Apr 1, 2018 at 1:33 PM, David Kraeutmann  wrote:

> Leveraging the blockchain to compile GHC is a great idea!
>
> Unfortunately the proof-of-work algorithm is still just wasted cycles.
>
> On Sun, 1 Apr 2018, 07:28 ,  wrote:
>
>> Overall this is a great proposal; glad we're finally modernizing! Still,
>> it's got a pretty steep price tag - maybe we can offset costs with an
>> I.C.O.? ("GHC Coin"?)
>>
>>
>> > El 1 abr 2018, a las 00:56, Gershom B  escribió:
>> >
>> > Fellow Haskellers,
>> >
>> > Recently there has been much work into creating a better and more
>> > professional GHC development process, including in the form of DevOps
>> > infrastructure, scheduled releases and governance, etc. But much
>> > remains to be done. There continues to be concern about the lack of
>> > use of industry-standard tools. For example, GHC development is tied
>> > to Phabricator, which is a custom product originally developed for
>> > in-house use by an obscure startup. GHC development is documented on a
>> > wiki still -- ancient technology, not appropriate for 2018. Wiki
>> > syntax for documentation needs to be replaced by the only modern
>> > standard -- github flavored markdown. Trac itself is ancient
>> > technology, dating to 2003, well before anybody knew how to program
>> > real software. It provides no support for all the most important
>> > aspects of software development -- Kanban boards, sprint management,
>> > or even burndown charts.
>> >
>> > What is necessary is an integrated solution that holistically
>> > addresses all aspects of development, fostering a DevOps culture,
>> > embracing cloud-first, agile-first, test-first, disrupt-first
>> > principles, and with an
>> > ironclad SLA. Rather than homegrown solutions, we need a GHC
>> > development process that utilizes tools and procedures already
>> > familiar to regular developers. Cross-sectional feature comparison
>> > analysis yields a clear front-runner -- Visual Studio Team Services.
>> >
>> > VSTS is a recognized Leader in the Gartner Magic Quadrant for
>> > Enterprise Agile Planning tools. It lets us migrate from custom git
>> > hosting to a more reliable source control system -- Team Foundation
>> > Version Control. By enforcing the locking of checked-out files, we can
>> > prevent the sorts of overlap between different patches that occur in
>> > the current distributed version management system, and coordinate
>> > tightly between developers, enabling and fostering T-shaped skills.
>> > Team Build also lets us migrate from antiquated makefiles to modern,
>> > industry-standard technology -- XML descriptions of build processes
>> > that integrate automatically with tracking of PBIs (product backlog
>> > items), and one-button release management.
>> >
>> > In terms of documentation, rather than deal with the subtleties of
>> > different markdown implementations and the confusing world of
>> > restructured text, we can utilize the full power of Word, including
>> > SharePoint integration as well as Office 365 capabilities, and
> >> > integration
>> > with Microsoft Teams, the chat-based workspace for collaboration. This
>> > enables much more effective cross-team collaboration with product and
>> > marketing divisions.
>> >
>> > One of the most exciting features of VSTS is powerful extensibility,
>> > with APIs offered in both major programming paradigms in use today --
>> > JVM and .NET. The core organizational principle for full application
>> > lifecycle management is a single data construct -- the "work item"
>> > which documentation informs us "represents a thing," which can be
>> > anything that "a user can imagine." The power of work items comes
>> > through their extensible XML representation. Work items are combined
>> > into a Process Template, with such powerful Process Templates
>> > available as Agile, Scrum, and CMMI. VSTS will also allow us to
>> > analyze GHC Developer team performance with an integrated reporting
>> > data warehouse that uses a cube.
>> >
>> > Pricing for up to 100 users is $750 a month. Individual developers can
>> > also purchase subscriptions to Visual Studio Professional for $45 a
>> > month. I suggest we start directing resources towards a transition. I
>> > imagine all work to accomplish this could be done within a year, and
>> > by next April 1, the GHC development process will be almost
>> > unrecognizable from that today.
>> >
>> > Regards,
>> > Gershom
>> > ___
>> > ghc-devs mailing list
>> > ghc-devs@haskell.org
>> > 

Re: Windows

2018-03-26 Thread Shao, Cheng
Hi Simon,

Running "C:\msys64\msys2_shell.cmd -mingw64 -mintty" has the same effect as
clicking on the "MinGW-w64 Win64 Shell" shortcut. It is the proper way to
start an mingw64 shell. If you have run "pacman -S" to update the MSYS2
packages before, then the old shortcuts set up by the MSYS2 installer may
cease to function, and you can put a new shortcut on your desktop with that
command.
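For anyone hitting the same configure failure, here is a minimal sketch of the two equivalent setups described above (assuming a default MSYS2 install at C:\msys64; adjust the path if your install differs):

```shell
# Launch the MinGW64 environment explicitly instead of relying on a
# possibly stale installer shortcut (run from cmd.exe, or use this as
# the target of a new desktop shortcut):
#   C:\msys64\msys2_shell.cmd -mingw64 -mintty
#
# The msys2_shell.cmd startup script sets MSYSTEM for you. In an
# already-running shell, the same effect can be had by exporting the
# variable before running ./configure:
export MSYSTEM=MINGW64
echo "$MSYSTEM"
```

With MSYSTEM set to MINGW64, configure picks up the mingw64 toolchain rather than the plain MSYS one.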

On Mon, Mar 26, 2018 at 6:02 PM, Simon Peyton Jones <simo...@microsoft.com>
wrote:

> If the build environment is managed by an MSYS2 installation, then the
> MinGW64 shell startup script automatically sets up "MSYSTEM" for you. It
> can be launched like "C:\msys64\msys2_shell.cmd -mingw64 -mintty".
>
> Well I just followed the Method A instructions at
>
> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows
>
>
>
> Are you saying that I should run "C:\msys64\msys2_shell.cmd -mingw64
> -mintty" just once, after installing?  Or repeatedly?  Or that I should
> somehow use it as my main shell?  And what does that command actually do?
>
> Sorry to be dense
>
>
>
> Simon
>
>
>
> *From:* ghc-devs <ghc-devs-boun...@haskell.org> *On Behalf Of *Shao, Cheng
> *Sent:* 26 March 2018 10:59
> *To:* ghc-devs@haskell.org
> *Subject:* Re: Windows
>
>
>
> Hi Simon,
>
>
>
> If the build environment is managed by an MSYS2 installation, then the
> MinGW64 shell startup script automatically sets up "MSYSTEM" for you. It
> can be launched like "C:\msys64\msys2_shell.cmd -mingw64 -mintty".
>
>
>
> On Mon, Mar 26, 2018 at 5:46 PM, Simon Peyton Jones via ghc-devs <
> ghc-devs@haskell.org> wrote:
>
> Making it part of the error message would be v helpful.
>
> I have added a section to "Troubleshooting" on
> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows
>
> But it should really be part of the instructions higher up to say
> export MSYSTEM=MINGW64
>
> Might someone do that?  I wasn't quite sure where
>
> Simon
>
>
> |  -----Original Message-----
> |  From: ghc-devs <ghc-devs-boun...@haskell.org> On Behalf Of Ben Gamari
> |  Sent: 24 March 2018 16:42
> |  To: Gabor Greif <ggr...@gmail.com>; loneti...@gmail.com
> |  Cc: ghc-devs@haskell.org
> |  Subject: Re: Windows
> |
> |  Gabor Greif <ggr...@gmail.com> writes:
> |
> |  > Just an idea...
> |  >
> |  > could this hint be part of the `configure` error message?
> |  >
> |  Indeed. See D4526.
> |
> |  Cheers,
> |
> |  - Ben
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Windows

2018-03-26 Thread Shao, Cheng
Hi Simon,

If the build environment is managed by an MSYS2 installation, then the
MinGW64 shell startup script automatically sets up "MSYSTEM" for you. It
can be launched like "C:\msys64\msys2_shell.cmd -mingw64 -mintty".

On Mon, Mar 26, 2018 at 5:46 PM, Simon Peyton Jones via ghc-devs <
ghc-devs@haskell.org> wrote:

> Making it part of the error message would be v helpful.
>
> I have added a section to "Troubleshooting" on
> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows
>
> But it should really be part of the instructions higher up to say
> export MSYSTEM=MINGW64
>
> Might someone do that?  I wasn't quite sure where
>
> Simon
>
> |  -----Original Message-----
> |  From: ghc-devs <ghc-devs-boun...@haskell.org> On Behalf Of Ben Gamari
> |  Sent: 24 March 2018 16:42
> |  To: Gabor Greif <ggr...@gmail.com>; loneti...@gmail.com
> |  Cc: ghc-devs@haskell.org
> |  Subject: Re: Windows
> |
> |  Gabor Greif <ggr...@gmail.com> writes:
> |
> |  > Just an idea...
> |  >
> |  > could this hint be part of the `configure` error message?
> |  >
> |  Indeed. See D4526.
> |
> |  Cheers,
> |
> |  - Ben
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Is "cml_cont" of CmmCall used in practice?

2018-03-17 Thread Shao, Cheng
Hi all,

Is the "cml_cont" field of the CmmCall variant is really used in practice?
I traversed the output of raw Cmm produced by ghc compiling the whole base
package, but the value of cml_cont is always Nothing.
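For concreteness, here is a small self-contained sketch of the kind of traversal involved — using simplified stand-in types, not GHC's real Cmm AST, so the constructors and fields below are illustrative only:

```haskell
-- Stand-ins for GHC's Cmm node types (not the real definitions from
-- the ghc package), keeping just the field the question is about.
data Label = Label Int deriving Show

data CmmNode
  = CmmCall { cml_target :: String, cml_cont :: Maybe Label }
  | CmmComment String
  deriving Show

-- Collect every call whose cml_cont is actually populated with a
-- continuation label; the pattern in the comprehension filters out
-- non-call nodes and calls with cml_cont = Nothing.
usedConts :: [CmmNode] -> [Label]
usedConts nodes = [ l | CmmCall { cml_cont = Just l } <- nodes ]

main :: IO ()
main = do
  let graph = [ CmmCall "stg_gc" Nothing
              , CmmComment "tail call"
              , CmmCall "f_entry" Nothing ]
  print (usedConts graph)   -- prints [] when cml_cont is always Nothing
```

Running a fold like this over the raw Cmm of the whole base package is what the observation above refers to: the collected list stays empty.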

Regards,
Shao Cheng
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Custom ghcPrimIface for cross-compilation?

2018-03-15 Thread Shao Cheng
Hi all, is it possible for a 64-bit ghc to emit 32-bit code, if I supply a
custom ghcPrimIface via Hooks and also modify the platform flags in
DynFlags? The module does not import Prelude and has no dependencies other
than GHC.Prim.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Extracting representation from GHC

2018-01-19 Thread Shao Cheng
Hi,

IIRC you can already use hscFrontendHook in the DynFlags hooks to retrieve
TcGblEnv, and with a little bit of work, also HsParsedModule.
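The hook mechanism is just a record of optional overrides: a field set to Nothing means "use GHC's default". Here is a toy sketch of that dispatch pattern — with simplified stand-in types, not the real Hooks/DynFlags record from the ghc package:

```haskell
import Data.Maybe (fromMaybe)

-- Mock of GHC's Hooks record: an optional replacement for one
-- pipeline stage (hscFrontendHook plays this role in GHC itself).
newtype Hooks = Hooks { frontendHook :: Maybe (String -> String) }

-- The built-in behaviour used when no hook is installed.
defaultFrontend :: String -> String
defaultFrontend m = "typechecked:" ++ m

-- Dispatch: run the override if present, otherwise the default.
runFrontend :: Hooks -> String -> String
runFrontend hooks = fromMaybe defaultFrontend (frontendHook hooks)

main :: IO ()
main = do
  putStrLn (runFrontend (Hooks Nothing) "M")                -- default path
  putStrLn (runFrontend (Hooks (Just ("custom:" ++))) "M")  -- hook installed
```

In real GHC code, a hook installed this way can intercept the frontend result and hand the TcGblEnv (and, with a little plumbing, the HsParsedModule) to a tool.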

Regards,
Shao Cheng

On Fri, Jan 19, 2018, 5:41 PM Matthew Pickering <matthewtpicker...@gmail.com>
wrote:

> I have too wanted this in the past and made a post to a similar effect
> on the mailing list 6 months ago.
>
> https://mail.haskell.org/pipermail/ghc-devs/2017-July/014427.html
>
> It references this proposal for a similar feature.
>
> https://ghc.haskell.org/trac/ghc/wiki/FrontendPluginsProposal#no1
>
> If you would be glad to implement it then feel free to add me as a
> reviewer.
>
> Cheers,
>
> Matt
>
> On Fri, Jan 19, 2018 at 9:35 AM, Németh Boldizsár <nbo...@elte.hu> wrote:
> > Dear GHC Developers,
> >
> > I would like to ask your opinion on my ideas to make it easier to use
> > development tools with GHC.
> >
> > In the past when working on a Haskell refactoring tool I relied on using
> the
> > GHC API for parsing and type checking Haskell sources. I extracted the
> > representation and performed analysis and transformation on it as it was
> > needed. However using the refactorer would be easier if it could work
> with
> > build tools.
> >
> > To do this, my idea is to instruct GHC with a compilation flag to give
> > out its internal representation of the source code. Most build tools let
> > the user configure the GHC flags so the refactoring tool would be usable
> > in any build infrastructure. I'm thinking of using the pre-existing plugin
> > architecture and adding two new fields to the Plugin data structure. One
> > would be called with the parsed representation (HsParsedModule) when
> > parsing succeeds, another with the result of the type checking (TcGblEnv)
> > when type checking is finished.
> >
> > What do you think about this solution?
> >
> > Boldizsár
> >
> > (ps: My first idea was using frontend plugins, but I could not access the
> > representation from there, and the --frontend flag changed GHC's
> > compilation mode.)
> >
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re-compiling wired-in packages

2017-04-24 Thread Shao Cheng
Dear friends,

I'm trying to implement a new codegen for ghc, starting by using the ghc
api to compile modules to STG/Cmm. In order to support importing Prelude, I
also need the STG/Cmm representations of wired-in packages like base,
ghc-prim, etc, and I guess that means I need to re-compile them.

I can roughly think of two approaches:

* Load the `ModIface` of wired-in packages, which are tidied Core modules,
and convert to STG/Cmm
* Get the library sources from ghc repo and launch a regular compilation.

I haven't succeeded in either approach. For the first one, I haven't found
a way to convert the `ModIface` of an external package into regular
compilation targets; for the second one, a regular "cabal install" won't do
the trick; it seems compiling wired-in packages requires quite some magic.

What is the preferred way of re-compiling wired-in packages and retrieving
their STG/Cmm representations? Thanks a lot.

Cheers,
Shao Cheng
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs