On Mon, Aug 26, 2013 at 9:12 AM, Bennie Kloosteman <[email protected]> wrote:

>
> If you think you will create or await a new JIT runtime then any design
> decision that requires a feature not supported by the CLR can wait till
> the new runtime is created / built first ... which should be a huge
> time saving ;-p
>

Quite the opposite. I know of a number of groups that are starting to look
at next-generation runtimes and compiler infrastructures. The designs of
these things are very conservative. Because of the scale of work involved,
it tends to be the case that they consider only proven concepts and
technologies. Proven doesn't have to mean that it's running in millions of
user sites. Proven means that we know how to do it technically and there is
enough demonstrated benefit to justify inclusion of the technology or
concept.


> I'm not really convinced by the JIT model anymore.. the promised
> optimizations seem to have stalled or require so much time to work out
> that it's impractical to use in a JIT.
>

I'm not sure what promised optimizations you refer to. The original JITs
merely promised that compiling was better than interpreting, which it
generally is. Later ones played with trace-driven inlining and
optimization. The payoff for that was less clear. Then you get to HotSpot,
which is trying to do *value* driven optimizations. That's a line whose
benefit has always seemed dubious to me, and my opinion on that goes all
the way back to Ungar's early work on Self. It seemed to me then that
the desire for value-driven optimization was motivated by Self's rejection
of static typing, and that this was a case of one bad decision mandating a
second one to justify the first. Robert McNamara would have approved.

But on a less snarky note, I'm not aware of any work on static
metacompilation in such languages. There are definitely cases where
value-driven optimization is a big win, but I haven't seen any work that
would clearly tell me *which* cases those are. I'm inclined to wonder
whether these are the cases that metacompilation handles well.
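
For concreteness, what I have in mind by "value driven" is the general
pattern of guarded speculation: profile what actually flows through an
operation at run time, specialize for the common case, and guard the
specialized code with a check that falls back on the general path. The
sketch below uses names I made up, not anything from Self or HotSpot; it
is just the shape of the thing:

    // Toy tagged value, standing in for a dynamically typed object model.
    struct Value { int tag; long payload; };
    enum { TAG_SMALL_INT = 0 /* other tags elided */ };

    // The fully general, dynamically dispatched addition (stubbed here;
    // the real thing does full tag dispatch, boxing, overflow checks...).
    Value genericAdd(Value a, Value b) {
      return Value{ a.tag, a.payload + b.payload };
    }

    // What a type/value-feedback JIT might emit once profiling says both
    // operands are almost always small integers: a guarded fast path,
    // falling back to the general code when the guard fails.
    Value speculativeAdd(Value a, Value b) {
      if (a.tag == TAG_SMALL_INT && b.tag == TAG_SMALL_INT)
        return Value{ TAG_SMALL_INT, a.payload + b.payload };
      return genericAdd(a, b);
    }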

Just to be clear, I *do* know that the HotSpot technology can do things
that metacompilation cannot. The bit I'm not clear about is what the real
payoff is. Metrics, when published, talk about program speedup, but less
often about the program slowdown while resources are devoted to
optimization. As we showed pretty clearly in the HDTrans project, it can
easily be the case that improving the performance of the underlying dynamic
translation system erases the benefit that subsequent optimization gives
you. There are a lot of moving parts, so these things aren't easy to
analyze.
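
To put the trade-off in back-of-the-envelope terms (my framing here, not
HDTrans's published numbers): optimizing a region costs extra translation
time up front and saves some amount per execution relative to the baseline
translation, so it only pays for itself past a break-even execution count.
Make the baseline translator better and the per-execution savings shrink,
which pushes that break-even point out -- sometimes past the point where
the program has already exited.

    // Crude model: how many executions before optimizing a region pays off?
    //   optCost        - extra time spent optimizing the region
    //   savingsPerExec - time saved per execution vs. the baseline translation
    double breakEvenExecutions(double optCost, double savingsPerExec) {
      return optCost / savingsPerExec;
    }

    // e.g. 2000us of optimization work saving 0.5us/execution breaks even
    // at ~4000 executions; a faster baseline that cuts the savings to
    // 0.1us/execution pushes that to ~20000 executions.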


> That said, the existing C++ could go onto the CLR via managed C++ very
> quickly ...
>

That hasn't been the experience of people trying to port it. :-)


> Regarding LLVM .. not tracking type information for registers - does any
> decent compiler after it goes through all the optimization stages?
>

All of the good compilers for managed languages do, and there is no reason
not to. At any point where a compiler is introducing a temporary, it is
sitting there with an expression tree that it is trying to compute. That
expression tree, of necessity, has a known result type. It's very little
trouble to add a type argument to the "make a temporary" procedure.
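
Concretely, the change amounts to the "make a temporary" routine growing a
type parameter. The sketch below uses names I invented, not LLVM's or
anyone else's:

    // Whatever the IR uses to describe types.
    struct Type;

    // A temporary that carries its type from the moment it is created.
    struct Temp {
      int id;
      const Type *type;
    };

    class IRBuilder {
      int nextTemp = 0;
    public:
      // The expression tree being lowered already has a known result type,
      // so the caller simply passes it along when asking for a temporary.
      Temp makeTemp(const Type *resultType) {
        return Temp{ nextTemp++, resultType };
      }
    };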


> Mono uses LLVM, so how critical is it?
>

Since Mono performance sucks, I'd say it's pretty critical.


shap