> Erik Meijer <[EMAIL PROTECTED]> wrote:
Yes, that refers to the computer lab sponsored by MSR Cambridge
and MS Netherlands in Utrecht.
> > The plan is to have the release out the door by September 1st.
>
> Will that release support Haskell, or just Mondrian?
The compiler hooks into GHC by translating Core into GOO,
and then, after some source-to-source transformations, it
can spit out either C# or Java.
> I'm afraid I have to disagree on that point. Basically you translate
> quite a few things by implementing your own virtual machine on top of
> the .NET runtime. For example, argument passing is done with explicit
> pushes and pops on your own virtual machine stack, rather than by
> using the .NET runtime's argument passing. This approach makes the
> compiler fairly simple, but the generated code is not what _I_
> would call elegant.
As I explained, the translation has to implement the features that the
.NET runtime does not provide itself. Hence you must implement lazy
evaluation and currying yourself.
The same is true for Mercury. Since the .NET runtime does not support
backtracking, you simulate it by passing extra continuation
arguments around.
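The continuation-passing idea can be sketched in Haskell. This is an
illustrative toy, not Mercury's actual compilation scheme; the names
(`Goal`, `member`, `solutions`) are invented for the example.

```haskell
-- Hypothetical sketch: encoding nondeterminism with success/failure
-- continuations, the general idea behind compiling backtracking to a
-- runtime that does not support it.
type Answer = [Int]

-- A nondeterministic goal takes a success continuation (invoked once per
-- solution) and a failure continuation (invoked when choices run out).
type Goal = (Int -> Answer -> Answer) -> Answer -> Answer

-- member(X, Xs): succeed once for each element of Xs, then fail.
member :: [Int] -> Goal
member xs onSucc onFail = foldr onSucc onFail xs

-- Collect all solutions by instantiating the continuations to list building.
solutions :: Goal -> [Int]
solutions g = g (\x rest -> x : rest) []

main :: IO ()
main = print (solutions (member [1, 2, 3]))  -- prints [1,2,3]
```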
> Don't you currently encode tail calls too?
By exploiting tail calls, which we don't do yet, you can use
the underlying argument-passing mechanism when you call a known function,
by calling one of its fast entry points.
So you don't pay for currying and higher-order functions if you don't use them.
The same is true for laziness, by the way.
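A toy model of that pay-as-you-go idea, with all names invented for
illustration: a known, saturated call uses the native calling convention,
while a function that escapes as a value goes through a generic apply.

```haskell
-- Hypothetical sketch of the two entry points a compiled function could
-- have: a fast entry with native argument passing, and a slow, curried
-- entry used when the function is passed around as a value.
data Value = IntV Int | FunV (Value -> Value)

-- Fast entry point: used for a saturated call to a statically known function.
plusFast :: Int -> Int -> Int
plusFast x y = x + y

-- Slow entry point: a curried closure applied one argument at a time.
plusSlow :: Value
plusSlow = FunV (\(IntV x) -> FunV (\(IntV y) -> IntV (plusFast x y)))

-- Generic application, as used for unknown or higher-order calls.
apply :: Value -> Value -> Value
apply (FunV f) v = f v
apply _        _ = error "apply: not a function"

unInt :: Value -> Int
unInt (IntV n) = n
unInt _        = error "unInt: not an Int"

main :: IO ()
main = do
  print (plusFast 1 2)                                      -- known call
  print (unInt (apply (apply plusSlow (IntV 1)) (IntV 2)))  -- generic apply
```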
> And what about type classes and polymorphism?
By the time you reach Core, type classes have been compiled away
by the dictionary translation. To implement polymorphism for Haskell
there is no need to carry types around at runtime.
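The dictionary translation can be sketched in source Haskell (the real
translation happens on Core, and these names are invented): the class
becomes a record of methods, and each overloaded function takes that
record as an ordinary extra argument, so no types are needed at run time.

```haskell
-- class MyEq a where eq :: a -> a -> Bool   becomes a record of methods:
data MyEqDict a = MyEqDict { eq :: a -> a -> Bool }

-- instance MyEq Int   becomes an ordinary dictionary value:
eqIntDict :: MyEqDict Int
eqIntDict = MyEqDict { eq = (==) }

-- elemD :: MyEq a => a -> [a] -> Bool   becomes a function that takes
-- the dictionary explicitly as its first argument:
elemD :: MyEqDict a -> a -> [a] -> Bool
elemD d x = any (eq d x)

main :: IO ()
main = print (elemD eqIntDict 2 [1, 2, 3])  -- prints True
```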
> Also, as I understand it, Haskell/Mondrian programs that don't make use of
> currying -- e.g. those in which all functions have only one argument --
> still get encoded, rather than being mapped directly. So the encoding
> is not done in a way that you only pay for it when you use those features.
> This makes interoperability with other languages more difficult.
Not true. It is extremely easy to interoperate with other languages.
Yes, you do have to do some marshalling and unmarshalling, but again,
you *must* do that to match up the semantic differences between
the language of the caller and the callee.
To call out from Haskell, you use a trivial wrapper function that
pops the arguments from the stack (which is there to support currying),
evaluates them to WHNF (because in Haskell the caller does not do that),
and then calls the external function directly.
In the current implementation there is no need to unmarshall the result.
To call into Haskell, you use a trivial wrapper function that pushes
the arguments on the stack and calls the underlying Haskell function.
Or you call the function directly through its fast entry point.
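The call-out wrapper can be sketched as follows. `externalAdd` is a
placeholder for an imported foreign function, and the argument-stack
machinery is elided; only the forcing of arguments to WHNF is shown.

```haskell
-- Stand-in for an imported external (e.g. .NET) function, which expects
-- fully evaluated arguments.
externalAdd :: Int -> Int -> Int
externalAdd = (+)

-- Hypothetical compiler-generated stub: force each argument to WHNF
-- (the callee's language evaluates arguments at the call site, Haskell
-- does not), then call the external function directly.
callOutAdd :: Int -> Int -> Int
callOutAdd x y = x `seq` y `seq` externalAdd x y

main :: IO ()
main = print (callOutAdd 2 (1 + 2))  -- prints 5
```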
> Hmm, the "full" adjective here sounds like it might be a bit of an
> overstatement. Does that mean that I can call Mercury code from
> Haskell, have it all statically type checked, and not have to write
> any additional interface definitions or glue code? Including
> procedures which are polymorphic and/or use type classes?
> If so, I would be very surprised! I think the current level of interop
> supported is still a LONG way from what I would describe as "full"
> interoperability.
Come on Fergus! You will never achieve interoperability on that level
except when your semantic domains match exactly or when you have an
extremely complicated intermediate language. For example, most
languages don't make the distinction between values and computations
that Haskell does, and I don't expect or want, for example, VB programmers
to do that either. But it does mean that whenever I call a VB function,
I must write a little glue code to compensate for the difference in
semantics. The same is true for calling a nondeterministic or multi-moded
Mercury predicate.
With "full" I mean that, modulo the required semantic impedance matching,
you can call in both directions.
> So, could you elaborate on what sense you mean when you say we have
> "full" bidirectional interop?
> For example, which of the various CLS (Common Language Specification)
> categories does the Haskell and/or Mondrian implementation comply with?
> In addition, I think your compiler also needs to support attributes?
Aha, finally you hit a real issue. To be a CLS extender would require
a lot of language extensions to a lot of languages, even to OO languages
like Java! But for Haskell the situation is even more severe. Because
Haskell does not have a notion of class (as in C#), there is no way
you can expect Haskell to extend a class. Yet again, an extra level
of indirection does the trick. Since you can call Haskell from, say, C#,
you just write a little wrapper class in C# whose methods are implemented
in Haskell. On the Haskell side you call its constructor, passing it the
methods as arguments. This is essentially what we did for classic COM
(where you construct a vtable of dynamically exported function pointers)
and for Lambada (Haskell-Java interface via JNI), and it works fine.
You can do a similar thing for transactions, i.e. provide a higher-order
function that makes a Haskell function into a transactional function.
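The extra level of indirection can be sketched in Haskell by modelling the
C# wrapper class as a record of delegates; all names here are invented for
illustration, not taken from any real interface.

```haskell
-- Hypothetical sketch: the C# wrapper class is modelled as a record whose
-- fields play the role of the class's methods.
data Wrapper = Wrapper
  { greet :: String -> String   -- a method implemented in Haskell
  , add   :: Int -> Int -> Int  -- another method implemented in Haskell
  }

-- The "constructor call": the Haskell side supplies the method bodies
-- as ordinary function arguments.
mkWrapper :: Wrapper
mkWrapper = Wrapper
  { greet = \name -> "hello, " ++ name
  , add   = (+)
  }

main :: IO ()
main = do
  putStrLn (greet mkWrapper "Fergus")  -- prints hello, Fergus
  print (add mkWrapper 2 3)            -- prints 5
```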
> Actually that's not really true; compiling to IL is not enough.
>
> To support ASP+ for a given language, you also need to implement a
> certain API. Now my information on this is mostly second-hand,
> but as I understand it, this API pretty much assumes that your
> language is a fairly typical imperative language, with constructs
> like loops, etc. So it is fairly easy to implement this API for
> imperative languages, but not nearly so easy to implement it for
> functional languages or other non-traditional languages.
That is right. The trick here is to use the "code behind" style
of ASP pages.