On 8/23/2019 12:00 PM, Paul Koning via cctalk wrote:


On Aug 23, 2019, at 1:47 PM, Noel Chiappa via cctalk <cctalk@classiccmp.org> wrote:

From: Jon Elson

On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:

On a possible related note, I am looking for information on converting
CISC instructions to VLIW RISC.

I think it might end up looking a bit like the optimizers that were
used on drum memory computers back in the dark ages.

I dunno; those were all about picking _addresses_ for instructions, such
that the next instruction was coming up to the heads as the last one
completed.

The _order_ of execution wasn't changed, there was no issue of contention
for computing elements, etc. - i.e. all the things one thinks of a
CISC->VLIW translation as doing.
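
A toy sketch of what those optimizers computed (the sector count and
timings here are invented, not from any real machine): given where the
current instruction sits on the track and how many word-times it takes
to execute, the next instruction goes at the sector that will be
arriving under the heads as it finishes.

/* Sketch of drum "optimum programming" address assignment.
 * SECTORS and the execution times are made-up illustrative values. */
#include <stdio.h>

#define SECTORS 64   /* hypothetical words per drum track */

/* Best sector for the next instruction: while the current one executes,
 * the drum keeps turning, so aim for current + exec time + 1. */
static int next_sector(int current, int exec_word_times)
{
    return (current + exec_word_times + 1) % SECTORS;
}

int main(void)
{
    int pc = 0;
    int times[] = { 3, 5, 2, 8 };   /* invented execution times */
    for (int i = 0; i < 4; i++) {
        int next = next_sector(pc, times[i]);
        printf("instr at sector %2d (takes %d word-times) -> next at %2d\n",
               pc, times[i], next);
        pc = next;
    }
    return 0;
}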

Instruction ordering (instruction scheduling) is as old as the CDC 6600, though 
then it was often done by the programmer.
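
As a toy illustration of that kind of scheduling (the latencies and the
four-instruction "program" are invented, not real 6600 timings), a
greedy scheduler slots independent operations into the cycles a
long-latency multiply would otherwise waste:

/* Toy static instruction scheduler: each cycle, issue the first
 * instruction whose input is ready. Latencies are invented. */
#include <stdio.h>

struct instr {
    const char *text;
    int latency;   /* cycles until the result is available */
    int dep;       /* index of the instruction waited on, -1 if none */
};

int main(void)
{
    struct instr prog[] = {
        { "FMUL X1,X2 -> X6", 10, -1 },
        { "FADD X6,X3 -> X7",  4,  0 },  /* needs the multiply's result */
        { "IADD A1,A2 -> A3",  1, -1 },  /* independent: fills the gap  */
        { "IADD A3,A4 -> A5",  1,  2 },
    };
    int n = sizeof prog / sizeof prog[0];
    int ready[4];            /* cycle at which each result appears */
    int issued[4] = { 0 };
    int count = 0;

    for (int cycle = 0; count < n; cycle++) {
        for (int i = 0; i < n; i++) {
            int d = prog[i].dep;
            if (!issued[i] && (d < 0 || (issued[d] && ready[d] <= cycle))) {
                printf("cycle %2d: %s\n", cycle, prog[i].text);
                ready[i] = cycle + prog[i].latency;
                issued[i] = 1;
                count++;
                break;   /* one issue per cycle */
            }
        }
    }
    return 0;
}

The two integer adds issue in cycles 1 and 2 while the multiply is
still in its functional unit; the dependent add waits until cycle 10.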

An early example of that conversion is the work done at DEC for "just in time" conversion 
of VAX instructions to MIPS, and later to Alpha.  I wonder if their compiler technology was 
involved in that.  It wouldn't surprise me.  The Alpha "assembler" was actually the 
compiler back end, and as a result you could ask it to optimize your assembly programs.  That was 
an interesting way to get a feel for what transformations of the program would be useful given the 
parallelism in that architecture.
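
The core move in such a translator is easy to sketch (this is only the
idea, not DEC's actual VEST or mx tools; the opcodes and offsets are
invented): one memory-to-memory CISC operation expands into a short
load/op/store RISC sequence, which a real translator would then
schedule for the pipeline.

/* Sketch of CISC-to-RISC expansion. Toy opcodes, Alpha-flavored
 * mnemonics; not any real translator's output. */
#include <stdio.h>

enum cisc_op { CISC_ADD_M2M };   /* ADD src(mem), dst(mem) */

static void translate(enum cisc_op op, int src, int dst)
{
    switch (op) {
    case CISC_ADD_M2M:
        /* One CISC instruction becomes four RISC ones. */
        printf("LDQ  r1, %d(rbase)\n", src);
        printf("LDQ  r2, %d(rbase)\n", dst);
        printf("ADDQ r1, r2, r2\n");
        printf("STQ  r2, %d(rbase)\n", dst);
        break;
    }
}

int main(void)
{
    translate(CISC_ADD_M2M, 16, 24);   /* invented operand offsets */
    return 0;
}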

        paul

Why bother, is my view. The problem is threefold: a) the hardware people keep changing the internal details. b) A good compiler can see the original program structure and optimize for that. c) The flat memory model, as from FORTRAN or LISP, where variables are scattered at random over the entire memory space, scrambles your cache.

With that said, if you could define the optimization in some sort of MACRO format, changing parameters would be simple and the changes would be effective yet unseen. Kind of like the early compiler-compilers.


I see RISC as emulation of the HARVARD memory model.
A Harvard model would not take much change in programming other than not having a "SMALL" mode. Two 32-bit-wide buses (one for data, one for program) could
be faster than one large data path doing everything, since external memory
is more drum-like now, filling caches in bursts rather than serving random accesses.

I still favor the CLASSIC instruction set model: OP:AC:IX:OFFSET.
Core memory made the machines slow with the memory restore cycle, giving rise to CISC designs like the PDP-11 to make better use of that dead cycle.
RISC is only fast because of the page-mode cycle of the dynamic memory of
the time.
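
Decoding such a word is simple; here is a sketch using the PDP-10's
field widths (9-bit opcode, 4-bit AC, 1-bit indirect, 4-bit index,
18-bit address) as one concrete version of OP:AC:IX:OFFSET. The packed
instruction below is made up, though 0200 is the PDP-10 MOVE opcode.

/* Sketch: pack and decode a 36-bit OP:AC:IX:OFFSET word using the
 * PDP-10 field layout. The operand values are invented. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t word = ((uint64_t)0200 << 27)   /* 9-bit opcode (MOVE)  */
                  | ((uint64_t)3    << 23)   /* 4-bit AC field       */
                  | ((uint64_t)0    << 22)   /* 1-bit indirect       */
                  | ((uint64_t)5    << 18)   /* 4-bit index register */
                  |  (uint64_t)01234;        /* 18-bit offset        */

    printf("op=%03llo ac=%llo i=%llo ix=%llo offset=%06llo\n",
           (unsigned long long)((word >> 27) & 0777),
           (unsigned long long)((word >> 23) & 017),
           (unsigned long long)((word >> 22) & 1),
           (unsigned long long)((word >> 18) & 017),
           (unsigned long long)(word & 0777777));
    return 0;
}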

Too bad everything is all 8/16/32/64+ computing, or say a 36-bit classic-style
CPU design could run quite effectively at a few GHz.
Ben.
