On 02/03/2016 20:21, Ben Coman wrote:
On Wed, Mar 2, 2016 at 6:20 PM, Thierry Goubier
<thierry.goub...@gmail.com> wrote:
On 02/03/2016 11:07, Ben Coman wrote:

Back to Dimitris' original question, I wonder whether the potential speed
benefit comes not so much from using "C", but from leveraging a large
body of work on optimizing passes at the lower level.  I found a
tutorial [1] to produce a toy language "Kaleidoscope" that generates
LLVM IR such that "by the end of the tutorial, we’ll have written a
bit less than 1000 lines of code. With this small amount of code,
we’ll have built up a very reasonable compiler for a non-trivial
language including a hand-written lexer, parser, AST, as well as code
generation support with a JIT compiler."


What is costly from the Pharo side is that Kaleidoscope relies on the
LLVM C++ infrastructure to generate the IR, and linking to C++ code
and classes is hard to do.

Are you referring only to FFI interfacing versus C++ name mangling,
for which "the C bindings in include/llvm-c should help a lot, since
most languages have strong support for interfacing with C" [A], or
something more?

No, just the fact that the IR itself is quite complex (have a look at the LLVM IR reference), so you have to go through a context and a builder (Builder.CreateFAdd) in the C++ code (and I think the OCaml llvm module is probably large because it recreates that whole structure).
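
For concreteness, here is a minimal sketch of that context/builder pattern as seen through the C bindings in include/llvm-c that you mention (LLVMBuildFAdd being the C-level counterpart of Builder.CreateFAdd). The module and function names are made up for the example; the point is how much machinery an FFI binding from Pharo has to wrap just to emit one instruction:

#include <llvm-c/Core.h>

/* Sketch: build the IR for  double fadd2(double a, double b) { return a + b; } */
int main(void)
{
    LLVMContextRef ctx = LLVMContextCreate();
    LLVMModuleRef  mod = LLVMModuleCreateWithNameInContext("demo", ctx);
    LLVMBuilderRef bld = LLVMCreateBuilderInContext(ctx);

    LLVMTypeRef dbl      = LLVMDoubleTypeInContext(ctx);
    LLVMTypeRef params[] = { dbl, dbl };
    LLVMValueRef fn = LLVMAddFunction(mod, "fadd2",
                                      LLVMFunctionType(dbl, params, 2, 0));

    LLVMBasicBlockRef entry = LLVMAppendBasicBlockInContext(ctx, fn, "entry");
    LLVMPositionBuilderAtEnd(bld, entry);

    /* The builder emits instructions into the block it is positioned on. */
    LLVMValueRef sum = LLVMBuildFAdd(bld, LLVMGetParam(fn, 0),
                                          LLVMGetParam(fn, 1), "sum");
    LLVMBuildRet(bld, sum);

    LLVMDumpModule(mod);   /* prints the textual IR to stderr */

    LLVMDisposeBuilder(bld);
    LLVMDisposeModule(mod);
    LLVMContextDispose(ctx);
    return 0;
}

Even this one-instruction function needs a context, a module, types, a function, a basic block and a positioned builder before you get to the FAdd itself; that whole API surface is what a Pharo binding would have to reproduce.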

The naming scheme of LLVM IR is quite complex and non-obvious as well (the %0, %1, etc.), with strange rules that bite you if you don't number values exactly the way LLVM does when you use its API.
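
A small hand-written IR sketch (reusing the hypothetical fadd2 above, not actual LLVM output) shows the rule: unnamed values are numbered implicitly and sequentially within a function, so the two unnamed arguments are %0 and %1 and the first unnamed instruction must be %2; number it any other way by hand and the parser rejects the module:

define double @fadd2(double, double) {
entry:
  %2 = fadd double %0, %1    ; unnamed values must follow the implicit numbering
  ret double %2              ; writing %5 instead of %2 here is a parse error
}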

Thierry

[A] http://llvm.org/releases/3.1/docs/FAQ.html

cheers -ben



