I believe there is a bit of a misconception about what does or does not require a new backend. GHC has a number of intermediate representations from which one can branch off to build a backend; STG and Cmm are the most popular starting points. All our Native Code Generators and the LLVM code generator branch off from Cmm. Whether that is the right input representation for your target largely depends on the target and on the design of the code generator. GHCJS branches off from STG, and so, I believe, does Csaba's GRIN work, via the external STG. IIRC Asterius branches off from Cmm. I don't remember the details for Eta.
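To give a feel for what "branching off from an IR" means, here is a toy sketch. The types below are made up for illustration and are much simpler than GHC's real STG (which lives in GHC.Stg.Syntax), but the shape is the same: a backend is essentially a fold over top-level bindings that emits target code.

```haskell
-- Toy stand-in for an STG-like IR. The constructor names and the
-- pseudo-assembly "target" are invented for this sketch; GHC's real
-- STG is far richer.
module Main where

-- A right-hand side is either a closure (a function with its arguments
-- and body) or a saturated constructor application.
data Rhs
  = Closure [String] Expr
  | Con String [String]

-- In STG, functions are only ever applied to atoms (variables/literals),
-- which is what makes code generation from it so direct.
data Expr
  = Var String
  | App String [String]

type Binding = (String, Rhs)

-- The "code generator": one target-code line per top-level binding.
emit :: Binding -> String
emit (name, Closure args body) =
  name ++ ": closure(" ++ unwords args ++ ") = " ++ emitExpr body
emit (name, Con con fields) =
  name ++ ": alloc " ++ unwords (con : fields)

emitExpr :: Expr -> String
emitExpr (Var v)    = "enter " ++ v
emitExpr (App f as) = "call " ++ unwords (f : as)

main :: IO ()
main = mapM_ (putStrLn . emit)
  [ ("nil", Con "Nil" [])
  , ("dup", Closure ["x"] (App "cons" ["x", "x"]))
  ]
```

A Cmm-based backend looks analogous, just over a much lower-level, imperative representation (procedures, registers, memory operations) instead of closures and constructors.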
Why fork? Do you want to deal with GHC and GHC's development? If not, fork. Do you want to keep up with GHC's development? Then maybe don't fork. Do you think your compiler can stand on its own and doesn't follow GHC much, apart from being a Haskell compiler? By all means fork.

Eta is a bit special here. Eta forked off and basically started customising its Haskell compiler specifically for the JVM, which also allowed radical changes that would not have been permissible in mainline GHC. (Mainline GHC tries to support multiple platforms and architectures at all times; breaking any of them isn't an option that can be taken lightly.) Eta also started having Etlas, a custom Cabal, ... I'd still like to see a lot from Eta and its ecosystem be re-integrated into GHC. There have to be good ideas there that can be brought back. It just needs someone to go look and do the work.

GHCJS is being aligned more closely with GHC right now, precisely so that it can eventually be re-integrated. Asterius went down the same path, likely inspired by GHCJS, but I think I was able to convince the author that eventual upstreaming should be the goal, and that the project should stay as close as possible to GHC for that reason.

Now, if you consider adding a codegen backend, this can be done, but again it depends on your exact target. I'd love to see a CLR target, yet I don't know enough about the CLR to give informed suggestions here. If you have a toolchain that functions sufficiently similarly to a stock C toolchain (or you can easily make your toolchain look sufficiently similar to one), most of it will just work.
If you can separate your build into compilation (source to some form of object code), aggregation (object code into archives), and linking (objects and archives into shared objects or executables), you can likely plug your toolchain into GHC (and Cabal) and have it work, once you have taught GHC how to produce your target language's object code. If your toolchain does things differently, a bit more work is involved in teaching GHC (and Cabal) about that.

This all only gives you *Haskell*, though. You still need the Runtime System. If you have a C -> target compiler, you can try to re-use GHC's RTS. This is what the WebGHC project did: they re-used GHC's RTS and implemented a shim for Linux syscalls, emulating just enough for the RTS to think it's running on some musl-like Linux. You most likely want something proper here eventually, but this might be a first stab at getting something working.

Next you'll have to deal with c-bits: Haskell packages that link against C parts. This is going to be challenging (not impossible, but challenging), as much of the Haskell ecosystem expects the ability to compile C files and use those for low-level system interaction.

Once you have your codegen working, you can use Hackage overlays to build a set of patched packages. At that point you could start patching ecosystem packages to work on your target, until your changes are upstreamed, and provide your users with a Hackage overlay (essentially Hackage plus patches for specific packages).

Hope this helps. You'll find most of us on irc.freenode.net#ghc

On Fri, Mar 26, 2021 at 1:29 PM Clinton Mead <clintonm...@gmail.com> wrote:

> Thanks again for the detailed reply Ben.
>
> I guess the other dream of mine is to give GHC a .NET backend. For my problem it would be the ideal solution, but it looks like other attempts in this regard (e.g. Eta, GHCJS etc) seem to have difficulty keeping up with updates to GHC. So I'm sure it's not trivial.
> It would be quite lovely though if I could generate .NET + Java + even Python bytecode from GHC.
>
> Whilst not solving my immediate problem, perhaps my efforts are best spent in giving GHC a plugin architecture for backends (or if one already exists?) trying to make a .NET backend.
>
> I believe "Csaba Hruska" is working in this space with GRIN, yes?
>
> I read SPJs paper on Implementing Lazy Functional Languages on Stock Hardware: The Spineless Tagless G-machine
> <https://www.microsoft.com/en-us/research/publication/implementing-lazy-functional-languages-on-stock-hardware-the-spineless-tagless-g-machine/>
> which implemented STG in C and whilst it wasn't trivial, it didn't seem stupendously complex (even I managed to roughly follow it). I thought to myself also, implementing this in .NET would be even easier because I can hand off garbage collection to the .NET runtime so there's one less thing to worry about. I also, initially, don't care _too_ much about performance.
>
> Of course, there's probably a whole bunch of nuance. One actually needs to, for example, represent all the complexities of GADTs into object orientated classes, maybe converting sum types to inheritance hierarchies with Visitor Patterns. And also you'd actually have to make sure to do one's best to ensure exposed Haskell functions look like something sensible.
>
> So I guess, given I have a bit of an interest here, what would be the best approach if I wanted to help GHC develop more backends and into an architecture where people can add backends without forking GHC? Where could I start helping that effort? Should I contact "Csaba Hruska" and get involved in GRIN? Or is there something that I can start working on in GHC proper?
> Considering that I've been playing around with Haskell since 2002, and I'd like to actually get paid to write it at some point in my career, and I have an interest in this area, perhaps this is a good place to start, and actually helping to develop a pluggable backend architecture for GHC may be more useful for more people over the long term than trying to hack up an existing GHC to support 32 bit Windows XP, a battle I suspect will have to be refought every time a new GHC version is released given the current structure of GHC.
>
> On Fri, Mar 26, 2021 at 1:34 PM Ben Gamari <b...@well-typed.com> wrote:
>
>> Clinton Mead <clintonm...@gmail.com> writes:
>>
>> > Thanks all for your replies. Just going through what Ben has said step by step:
>> >
>> >> My sense is that if you don't need the threaded runtime system it would
>> >> probably be easiest to just try to make a modern GHC run on Windows XP.
>> >
>> > Happy to run non-threaded runtime. A good chunk of these machines will be single or dual core anyway.
>> >
>> That indeed somewhat simplifies things.
>>
>> >> As Tamar suggested, it likely not easy, but also not impossible. WinIO
>> >> is indeed problematic, but thankfully the old MIO IO manager is still
>> >> around (and will be in 9.2).
>> >
>> > "Is still around"? As in it's in the code base and just dead code, or can I trigger GHC to use the old IO manager with a GHC option?
>> >
>> >> The possible reasons for Windows XP incompatibility that I can think of
>> >> off the top of my head are:
>> >>
>> >> * Timers (we now use QueryPerformanceCounter)
>> >
>> > This page suggests that QueryPerformanceCounter
>> > <https://docs.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter>
>> > should run on XP. Is this incorrect?
>> >
>> It's supported, but there are caveats [1] that make it unreliable as a timesource.
>>
>> [1] https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps#windowsxp-and-windows2000
>>
>> >> * Big-PE support, which is very much necessary for profiled builds
>> >
>> > I don't really need profiled builds
>> >
>> Alright, then you *probably* won't be affected by PE's symbol limit.
>>
>> >> * Long file path support (mostly a build-time consideration as Haskell
>> >> build systems tend to produce very long paths)
>> >
>> > I don't need to build on Windows XP either. I just need to run on Windows XP so hopefully this won't be an issue. Although if GHC was modified for long file path support so it could build itself with long file path support presumably it will affect everything else it builds also.
>> >
>> If you don't need to build on XP then I suspect this won't affect you.
>>
>> >> There may be others, but I would start looking there. I am happy to
>> >> answer any questions that might arise.
>> >
>> > I'm guessing the way forward here might be a patch with two options:
>> >
>> > 1. -no-long-path-support/-long-path-support (default -long-path-support)
>> > 2. -winxp
>> >
>> > The winxp option shall:
>> >
>> > - Require -no-long-path-support
>> > - Conflicts with -threaded
>> > - Conflicts with profiled builds
>> > - Uses the old IO manager (I'm not sure if this is an option or how this is done).
>> >
>> The old IO manager is still the default, although this will likely change in 9.2.
>>
>> > What do you think (roughly speaking)?
>>
>> Yes, that is essentially correct. I would probably start by trying to run a 32-bit GHC build on Windows XP under gdb and see where things fall over.
>>
>> Cheers,
>>
>> - Ben
>>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs