On Wednesday, 19 December 2012 at 09:25:54 UTC, Walter Bright wrote:
Mostly. If you use bytecode, you have Yet Another Spec that has to be defined and conformed to. This has a lot of costs.

But those are mostly one-time costs, and for software that has to run millions of times over, the performance gains from first compiling to bytecode could be worth those costs over the long term.

If there are better methods of producing the same or better results that are not strictly bytecode, then that's another story; however, one goal is to have a common language that brings everything together under a common roof.

One question I have for you: what percentage performance gain can you expect by using a well-chosen bytecode-like language versus interpreting directly from source code?
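
Just to make that comparison concrete for myself, here's a toy sketch in plain C (made-up opcodes, purely illustrative, not meant to resemble any real VM's format) of what the bytecode side looks like. The point is that once the front end has paid the parse/analysis cost and emitted the flat array, the inner loop never touches source text again:

#include <stdio.h>

/* Toy opcodes for a made-up stack machine, purely illustrative. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

/* Dispatch loop: each step is an array read plus a switch,
   with no re-lexing or re-parsing of source text. */
static void run(const int *code)
{
    int stack[64];
    int sp = 0;
    int pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];       break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp-1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]);    break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* (2 + 3) * 4: the front end pays the parsing cost once when it
       emits this array; run() can then execute it millions of times. */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}

A direct source interpreter has to redo at least the lexing and parsing on every run (or cache an AST and walk it, which still chases pointers instead of scanning a flat array), so the gap depends heavily on how much of the total time is dispatch versus real work.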

The other question is whether there are better alternative techniques, for example compiling regular source directly to native code with a JIT approach. In many ways that seems like the best approach, which I suppose is precisely what you've been arguing for all this time. So perhaps I've managed to convince myself that you are indeed correct. I'll take that stance and see if it sticks.

BTW I'm not a fan of interpreted languages, except for situations where you want to transport code in the form of data, or store it for later portable execution. Lua embedded in a game engine is a good example of such a use case (although why not D!).
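
For reference, the usual embedding pattern with the standard Lua C API is only a handful of calls. A minimal sketch (error handling and sandboxing left out; the script is hard-coded as a string here just to show the 'code as data' idea):

#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void)
{
    lua_State *L = luaL_newstate();   /* fresh Lua VM inside the host app */
    luaL_openlibs(L);                 /* load the standard libraries */

    /* The script is just data; it could equally come from a file,
       a save game, or over the network. */
    const char *script = "print('hello from embedded Lua')";

    if (luaL_dostring(L, script) != 0) {
        fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
    }

    lua_close(L);
    return 0;
}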

--rt
