If you looked at the actual machine code all those many years ago when I was 
doing some piddly programming in C, until it was optimized it was full of dead 
space padded with no-op opcodes (do nothing, move on to the next instruction), 
since the compiler set aside large amounts of space for all the variables.
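
You can still watch a compiler do this today.  Here's a little experiment -- 
just a sketch, assuming a garden-variety cc (the exact output varies by 
compiler and target, and the file name waste.c is only for the example):

    /* Compile this twice and compare the generated assembly:

           cc -O0 -S waste.c      (unoptimized)
           cc -O2 -S waste.c      (optimized)

       At -O0 most compilers reserve a stack slot for every variable
       and shuttle each value through memory; at -O2 the whole thing
       typically collapses to a couple of instructions, and the
       remaining padding shows up as nop bytes used to align the
       next function. */
    int waste(int x)
    {
        int a = x;
        int b = a + 1;
        int c = b + 1;
        return c;
    }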

Microsoft programs in those days were notorious for their size for this 
reason: very poorly optimized, and as a result dead slow, since the processor 
spent most of its time loading no-ops.  Almost as stupid as the sequence so 
often found of "mask interrupts" followed by "halt processor" that none of 
their programming languages filtered out.  That sequence turns off the mouse 
and keyboard and stops the processor; the only recourse is to power cycle to 
reboot.  
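
For the curious, here's that deadly pair spelled out with GCC-style inline 
assembly on x86 -- a sketch only: on a modern protected-mode OS these are 
privileged instructions and the program just dies with a fault, but on 
real-mode DOS or bare metal it wedges the machine exactly as described:

    #include <stdio.h>

    int main(void)
    {
        /* WARNING: do not run this anywhere you care about. */
        __asm__ volatile ("cli");  /* mask (disable) maskable interrupts */
        __asm__ volatile ("hlt");  /* halt until the next interrupt --
                                      which, with interrupts masked,
                                      never arrives */
        printf("never reached\n");
        return 0;
    }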

The theory was, as with "reduced instruction set" processors, that it cost 
fewer clock cycles to execute an endless string of no-ops than it was worth 
in development time to write decent code that ran far fewer instructions.  
Sloppy programming at the very best; digital malpractice is more like it.
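
Some back-of-the-envelope arithmetic shows what all that nothing cost, 
assuming the textbook figures of roughly 3 clocks per NOP on an 8086 and the 
original IBM PC's 4.77 MHz clock:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures: ~3 clocks per NOP on an 8086,
           original IBM PC clock of 4.77 MHz. */
        const double clocks_per_nop = 3.0;
        const double clock_hz      = 4.77e6;
        const long   nops          = 10000;

        double seconds = nops * clocks_per_nop / clock_hz;
        printf("%ld no-ops burn about %.2f ms doing nothing\n",
               nops, seconds * 1000.0);
        return 0;
    }

That works out to about 6.3 milliseconds of pure nothing for every ten 
thousand no-ops -- an eternity on a machine that slow.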

I never did understand how moving the complex operations of a CISC processor 
out into program space was supposed to be any faster on a RISC system, unless 
the CISC chip was from Intel and took half an hour to change stacks or 
something.  Off-loading the complex operations onto the programmer is going to 
cause more trouble (and errors) than complex instructions executed in hardware 
on the chip, especially task switching.
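
Here's that trade in miniature -- the mnemonics in the comments are 
illustrative only, not any one real assembler's syntax:

    #include <stdio.h>

    int main(void)
    {
        int total = 40, count = 2;

        /* One line of C, both operands sitting in memory: */
        total = total + count;

        /* A CISC machine (VAX-style) can do that as one instruction,
           with the complexity handled in microcode:

               add  count, total        ; memory-to-memory, one opcode

           A classic load/store RISC has to spell it out:

               lw   r1, total           ; load
               lw   r2, count           ; load
               add  r1, r1, r2          ; add in registers
               sw   r1, total           ; store

           Four instructions doing the work of one -- the off-loading
           complained about above. */
        printf("total = %d\n", total);
        return 0;
    }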

I'm not sure Microsoft ever did figure that one out; they had to license Unix 
(selling it as Xenix) instead.