I read somewhere that the cable lengths were expressly engineered so that 
signals arrived at the chips at nearly the same time, reducing chip 
“wait” times and yielding more speed. 
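As a back-of-the-envelope illustration of why length matching matters (a sketch with an assumed velocity factor, not figures from any particular machine):

```python
# Rough sketch: timing skew from mismatched cable lengths.
# ASSUMPTION: signals propagate at ~0.7c in a typical cable; real
# velocity factors vary by cable type.

C = 299_792_458            # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.7      # assumed fraction of c in the cable

def skew_ns(length_diff_m: float) -> float:
    """Skew in nanoseconds caused by a difference in cable length."""
    return length_diff_m / (C * VELOCITY_FACTOR) * 1e9

# Under these assumptions, a 0.3 m mismatch costs about 1.4 ns --
# noticeable when clock periods were only tens of nanoseconds.
print(round(skew_ns(0.3), 2))
```

With cycle times that short, trimming every interconnect to equal length was one of the few ways to keep all inputs valid on the same clock edge.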

That raises a question. Older chips like the Z80 and 8080 lines required 
separate support chips, which added latency while the system waited for those 
support chips to “settle”. Does that imply that newer microprocessors, with 
that support integrated on the chip, are generally faster for that reason?


Sent from my iPhone

> On Apr 22, 2024, at 12:54, Chuck Guzis via cctalk <cctalk@classiccmp.org> 
> wrote:
> 
> On 4/22/24 12:31, ben via cctalk wrote:
>> 
>> Classic CPU designs like the PDP-1 might better be called RISC.
>> Back then you matched the cpu word length to data you were using.
>> 40 bits made a lot of sense for real computing, even if you
>> had no RAM memory at the time, just drum.
> 
> I'd call the CDC 6600 a classic RISC design, at least as far as the CPU
> went. Classes were given to programming staff on timing code precisely;
> I spent many happy hours trying to squeeze the last few cycles out of a
> loop (where the biggest bang for the buck was possible).
> 
> I think bitsavers (I haven't looked) has a document or two on how to
> time code for that thing.
> 
> --Chuck
