On Wed, Jul 29, 2020 at 01:56:05AM -0700, zeRusski wrote:
> This is a really cool piece of history! Thank you.
> 
> I'll admit I'm somewhat fuzzy here - it maybe a bit too meta for me or 
> perhaps I don't quite understand what you're trying to say. Isn't 
> interpreting n levels deep also linear in n?

Answer 1.

Let's say, for example, that the interpreter takes 10 instructions to 
decode, make decisions about, and execute each interpreted 
instruction.  (In real life this varies a lot between interpreters and 
between interpreted instructions, but let's keep the example simple.)
(And, also to keep things simple, let's say the interpreter is 
interpreting the machine language of the machine it's running on -- 
not an unusual technique in a debugger.)

Then interpreting a program using an interpreter running on hardware 
would take ten times as long as executing the program on the same 
hardware.

And likewise, interpreting the program in an interpreter being 
interpreted by the interpreter running on the hardware would take another 
ten times as long as just interpreting the program with an interpreter 
running on the hardware.  

Thus 10 x 10 = 100 times as slow, and each further level of 
interpretation multiplies the total by another factor of 10; this is 
where the exponential comes in.
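
To make the arithmetic concrete, here's a tiny Python sketch (the 
factor of 10 is the made-up round number from above, not a measurement):

    # Each level of interpretation multiplies running time by roughly
    # the per-instruction overhead; here, the made-up factor of 10.
    SLOWDOWN_PER_LEVEL = 10

    for levels in range(1, 5):
        print(levels, "level(s) of interpretation:",
              SLOWDOWN_PER_LEVEL ** levels, "times slower")

    # 1 level(s) of interpretation: 10 times slower
    # 2 level(s) of interpretation: 100 times slower
    # 3 level(s) of interpretation: 1000 times slower
    # 4 level(s) of interpretation: 10000 times slower

With a compiler at each level, by contrast, you pay for n compilation 
passes once and then run at full speed, so the total cost grows only 
linearly in n.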

> Only difference between the 
> two approaches  I see is that compiler lets you persist the fruits of its 
> labor so you don't have to start from scratch every time. Couldn't you 
> accommodate this with an interpreter (with some effort) although at this 
> point it becomes a compiler I suppose. Definitely fuzzy here.

That is exactly the difference between a compiler and an interpreter.

Answer 2:

Time to muddy the situation.

There are mixed-style language implementations.

If you only execute each piece of code once, interpreters tend to be 
faster.  But if you are to execute code many times, it's faster to 
compile.  It takes time to compile, but you get it back by saving time 
in later executions.

So what these mixed implementations do is interpret the code, but 
keep track of how often each piece of code (such as a function) is 
called.

When the count reaches a particular threshold, the implementation pauses 
interpretation and compiles that code; the compiled code is used thereafter.

There is some time penalty compared with compiling everything up front, 
because you waste time interpreting functions several times before you 
compile them.  This is offset by not having to compile code that is 
used only a few times.
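
A minimal sketch of that counting scheme, in Python (the names and the 
stand-in "compiler" are invented for illustration; a real system is far 
more involved):

    # Mixed interpretation/compilation: count calls, compile hot functions.
    THRESHOLD = 5
    call_counts = {}
    compiled = {}    # name -> "native" code (here just a Python callable)

    def interpret(body, args):
        # Stand-in for a slow, instruction-by-instruction interpreter.
        return body(*args)

    def compile_to_native(body):
        # Stand-in for a real compiler; here it just hands the body back.
        return body

    def execute(name, body, args):
        if name in compiled:                  # hot code runs compiled
            return compiled[name](*args)
        call_counts[name] = call_counts.get(name, 0) + 1
        if call_counts[name] >= THRESHOLD:    # hot enough: compile it
            compiled[name] = compile_to_native(body)
        return interpret(body, args)

    for i in range(8):
        execute("square", lambda x: x * x, (i,))
    print("square" in compiled)    # True: compiled after 5 calls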

The time behaviour of this kind of system is, overall, more like a 
compiler's than an interpreter's, because the code that is executed the 
most does eventually get compiled and the rest gets interpreted only a 
limited number of times.

However, if it were to execute a copy of its own code, it would have to 
recompile it, unless it had a mechanism to check if the newly input 
program is the same as one it has already compiled.  This isn't 
impossible, but is not usually done.
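
One way such a check could work (a sketch only; I'm not claiming any 
particular system does it this way) is to key the cache of compiled 
code on a hash of the source text:

    # Hypothetical compiled-code cache keyed by a hash of the source.
    import hashlib

    compiled_cache = {}

    def compile_source(source_text):
        # Stand-in for an expensive compilation step.
        return ("native code for", source_text)

    def get_compiled(source_text):
        key = hashlib.sha256(source_text.encode("utf-8")).hexdigest()
        if key not in compiled_cache:
            compiled_cache[key] = compile_source(source_text)
        return compiled_cache[key]

    get_compiled("(define (f x) (* x x))")
    get_compiled("(define (f x) (* x x))")   # second call reuses the cache
    print(len(compiled_cache))               # 1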

> 
> > But when going n levels deep, the total execution time with a compiler 
> > is linear in n, and with an interpreter it's exponential. 

I use this criterion to tell whether a particular implementation is more 
like an interpreter or a compiler.

> >
> > That makes interpreting interpreters impractical when n gets large (even 
> > with n around 3 or 4); whereas compiling compilers can be done even for 
> > larger n. 
> >

Answer 3.

On modern machines, it is also important to keep memory demands low.  
Accessing large amounts of memory regularly tends to push other data out 
of cache, or even out of RAM onto disk.

Conserving storage is a common reason for using interpreted bytecode as 
the target language for a compiler.  The bytecode is usually smaller 
than the machine language.  If the bytecode interpreter is small enough, 
this is a definite win.  Bytecode was first used, as far as I know, on 
machines with small memories -- about 64K RAM total or even less.

Bytecode can also be portable.  You just need to write a (usually 
small) bytecode interpreter for each new machine.

Of course it's still possible, at run time, to compile the most-used 
byte-coded functions into actual machine code, trading memory use for 
execution time.
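
For concreteness, here is a toy stack-based bytecode interpreter 
(entirely invented; three one-byte opcodes) showing how compact both 
the program text and the interpreter loop can be:

    # Toy bytecode: PUSH takes a one-byte operand; ADD and PRINT take none.
    PUSH, ADD, PRINT = 0, 1, 2

    def run(code):
        stack, pc = [], 0
        while pc < len(code):
            op = code[pc]
            if op == PUSH:                    # push the next byte as a literal
                stack.append(code[pc + 1])
                pc += 2
            elif op == ADD:                   # pop two values, push their sum
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
                pc += 1
            elif op == PRINT:
                print(stack.pop())
                pc += 1

    run(bytes([PUSH, 2, PUSH, 40, ADD, PRINT]))   # 6 bytes of code; prints 42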

Answer 4:  An example:

The language FORTH used a *word*-code interpreter instead of a 
byte-code interpreter, each word being two bytes, and containing the 
address of the interpreter's machine-code routine that implemented that 
instruction.

This meant each word representing an instruction could be 
executed in the interpreter by a hardware function call to an 
indirect address.  In fact, user-coded functions could be called the 
same way -- each would be a sequence of addresses preceded by the 
machine code that invokes the word-code interpreter. 

Still very compact on a machine with two-byte addresses.

Utterly impractical on a modern machine with 64-bit addresses, where the 
machine code for an operation can often be smaller than a machine 
address.
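
To make the threading mechanism concrete, here is a rough Python 
simulation (Python references stand in for FORTH's two-byte addresses, 
and ordinary functions stand in for the machine-code routines):

    # FORTH-style threaded code, simulated: a word is either a primitive
    # routine or a sequence of references to other words.
    stack = []

    def prim_dup():   stack.append(stack[-1])
    def prim_mul():   b, a = stack.pop(), stack.pop(); stack.append(a * b)
    def prim_print(): print(stack.pop())

    def run(word):
        if callable(word):    # primitive: a machine-code routine in real FORTH
            word()
        else:                 # user-defined: walk its list of word "addresses"
            for w in word:
                run(w)

    SQUARE = [prim_dup, prim_mul]     # roughly : SQUARE DUP * ;
    DEMO   = [SQUARE, prim_print]     # roughly : DEMO SQUARE . ;

    stack.append(7)
    run(DEMO)                         # prints 49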


I hope this set of answers clarifies the distinction between an 
interpreter and a compiler, how the distinction gets blurred in practice, 
and what the criteria are for choosing between them.

-- hendrik


-- 
You received this message because you are subscribed to the Google Groups 
"Racket Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to racket-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/racket-users/20200729154310.dol2c2ks3d6kome6%40topoi.pooq.com.
