Hello.

GHC 7.6 now supports a "capi" calling convention, which allows "function-like" macros to be invoked from Haskell code with "foreign import". To me this seems like a rather odd design.
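
For concreteness, here is roughly what such an import looks like (a
minimal sketch of mine; I'm using isdigit from <ctype.h>, which some C
libraries implement as a macro):

    {-# LANGUAGE CApiFFI #-}

    import Foreign.C.Types (CInt)

    -- Because isdigit may be a macro rather than a real function, a
    -- plain "ccall" import by symbol name is not guaranteed to work;
    -- "capi" makes GHC generate and compile a small C stub that calls
    -- it instead.
    foreign import capi "ctype.h isdigit"
      c_isdigit :: CInt -> CInt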

The advantage of pure ABI-level FFI specifications (the standard Haskell FFI) is that the calls can be generated in pure assembly by the Haskell compiler itself, with no dependence on the C language. In contrast, "capi" requires the wrapper to be produced by a C compiler, since the wrapped macro can expand to an arbitrary C expression.
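
For comparison, a plain ABI-level import needs no C compiler at all;
GHC just emits the call according to the platform calling convention
(a sketch, assuming sin from the standard math library):

    import Foreign.C.Types (CDouble)

    -- Resolved purely by symbol name at link time; no C stub is
    -- generated or compiled.
    foreign import ccall "sin"
      c_sin :: CDouble -> CDouble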

My question is: if the Haskell compiler is to support FFI wrappers that require a C compiler, why then are the wrappers so limited in functionality? The "capi" calling convention only supports function-like macros, i.e. ones that take expression arguments and expand into an expression. But there are many other kinds of macros, too: ones whose arguments are types or identifiers, and that may expand into a declaration or a statement. In fact, function-like macros are (or should be) a rarity, since inline functions serve the same purpose better.

So when a well-designed C API has public macros, these macros are usually _not_ function-like. But using them from Haskell still requires manually writing some wrapper C functions (or, now, wrapper macros!). Not a big deal, but then again, writing C wrappers for function-like macros wasn't a big deal previously, either.
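
For example (an illustration of mine, not taken from any particular
API): offsetof is a standard macro whose first argument is a type, so
"capi" cannot target it directly, and one still ends up writing a
one-line C wrapper and importing that the old way:

    import Foreign.C.Types (CSize)

    -- In a small C file compiled alongside the Haskell code:
    --
    --   #include <stddef.h>
    --   #include <time.h>
    --
    --   size_t tm_year_offset(void)
    --   { return offsetof(struct tm, tm_year); }
    --
    -- (tm_year_offset is a made-up name.) Then import it as usual:
    foreign import ccall "tm_year_offset"
      tmYearOffset :: CSize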

So to me "capi" seems to buy very little, and it only supports strangely specific needs. If a Haskell implementation is to support FFI wrappers that must be compiled via C, a much more general solution would have been to allow arbitrary inline C code (instead of just the name of a macro) in the definition of the wrapper. I can't see that costing much more than the current design, and its benefits seem much greater.

So am I missing something here?


Lauri

