On Thu, 6 May 2021 at 02:19, Jules Richardson via cctalk
<cctalk@classiccmp.org> wrote:
>
> I seem to recall an anecdote about Acorn hooking up the first prototype
> ARM-1 processor and it working, despite showing no current draw on the
> connected ammeter - it then transpired that the power supply was still
> switched off,  but it was so efficient that it was able to run via leakage
> current on the connected I/O lines.

Oh yes indeed.

Sophie Wilson gave a talk at last month's ROUGOL meeting. I'm not sure
if the video is online yet; I can share it here when it's up, if folk
are interested. She told some remarkable stories about the bring-up of
the first ARM chips:
* The unexpectedly low power draw;
* Simulating the instruction set in BBC BASIC on a BBC Micro during
the design phase, instead of on some vastly more expensive bigger system;
* The fact that the very first silicon worked very nearly perfectly
the first time;
* Embedding a `MANDEL` command into the BASIC for demos, because
people asked for it so often that it was worth implementing someone's
Mandelbrot algorithm in ARM code and putting it directly into the
interpreter (see the sketch below).
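
I don't know what that MANDEL code actually looked like, but the heart
of any such demo is presumably the standard escape-time iteration,
something like this rough C sketch (my illustration, not the original
ARM or BASIC):

    #include <stdio.h>

    /* Standard Mandelbrot escape-time iteration -- presumably the sort
     * of thing the MANDEL demo computed per pixel.  Illustrative only;
     * the real code was hand-written ARM inside the interpreter. */
    static int mandel_iters(double cr, double ci, int max_iter)
    {
        double zr = 0.0, zi = 0.0;
        int i;
        for (i = 0; i < max_iter; i++) {
            double zr2 = zr * zr, zi2 = zi * zi;
            if (zr2 + zi2 > 4.0)        /* escaped: |z| > 2 */
                break;
            zi = 2.0 * zr * zi + ci;    /* z := z^2 + c (imaginary part) */
            zr = zr2 - zi2 + cr;        /* (real part) */
        }
        return i;
    }

    int main(void)
    {
        /* Crude ASCII render, one character per "pixel". */
        for (int y = 0; y < 24; y++) {
            for (int x = 0; x < 72; x++) {
                double cr = -2.0 + x * (3.0 / 72.0);
                double ci = -1.2 + y * (2.4 / 24.0);
                putchar(mandel_iters(cr, ci, 64) == 64 ? '#' : ' ');
            }
            putchar('\n');
        }
        return 0;
    }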

She's really quite pessimistic about the future of microprocessor manufacture.

She gleefully made fun of Intel quite a bit, pointing out the [cough]
_inaccuracies_ and, shall we say, _over-enthusiastic_ claims in their
marketing materials concerning die size.

She's quite impressed with AMD's recent designs, which partition parts
of multicore processors into separate dies, so that only the bits that
really need it are made on the very expensive tiny-feature-size
processes, with the other bits on cheaper, larger feature sizes.

Oddly, she barely mentioned Apple Silicon, saying merely that they were
being extremely aggressive with wide superscalar designs and so on.

Her overall point was that although it _is_ possible for transistor
sizes etc. to get a _little_ smaller than is mainstream now, the
industry is now at the point where smaller-feature-size, faster chips
are actually getting _more_ expensive to make, not less. We are very
nearly at the end of the line for high-end chips getting faster at all.

Part of this is due to language design. I have noticed myself that in
recent decades, chips seem to be designed to run C fast, to put it
simply. Wilson feels that the instruction model of C is now a limiting
factor, and that one of the few ways forward is something akin to her
own FirePath processor design for Broadcom.

FirePath is more or less the "Son of ARM". Not much is public about
FirePath, and this is one of the best references I know of:
https://everything2.com/title/FirePath

It can do things like load eight different bytes of data into a set of
registers, perform arithmetic on them and, depending on the results,
write them back somewhere else or not, all in a single assembler
opcode in a single cycle... and she feels that no contemporary
high-level language can usefully express such operations.
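
The nearest a portable high-level language gets is spelling the
operation out as an explicit loop over lanes, and hoping the compiler
turns it into one wide predicated instruction. Here is a rough C sketch
of that kind of semantics (my own illustration of the general
SIMD-with-per-lane-predication idea, not FirePath's actual instruction
set; the saturating add and the predicate are invented for the example):

    #include <stdint.h>
    #include <stdio.h>

    /* Semantics of a FirePath-style operation: take 8 byte lanes, do
     * arithmetic on each, and write each result back -- or not --
     * depending on a per-lane condition.  In hardware that is one
     * predicated SIMD instruction; in C it has to be a loop. */
    static void add_sat_if_nonzero(const uint8_t a[8], const uint8_t b[8],
                                   uint8_t dst[8])
    {
        for (int lane = 0; lane < 8; lane++) {
            unsigned sum = a[lane] + b[lane];       /* per-lane arithmetic */
            if (b[lane] != 0)                       /* per-lane predicate  */
                dst[lane] = sum > 255 ? 255 : sum;  /* write back, saturated */
            /* else: leave dst[lane] untouched -- the "or not" part */
        }
    }

    int main(void)
    {
        uint8_t a[8]   = { 10, 20, 250, 40, 50, 60, 70, 80 };
        uint8_t b[8]   = {  0,  5,  10,  0,  1,  0,  2,  0 };
        uint8_t dst[8] = {  1,  1,   1,  1,  1,  1,  1,  1 };

        add_sat_if_nonzero(a, b, dst);
        for (int i = 0; i < 8; i++)
            printf("%u ", dst[i]);
        putchar('\n');   /* prints: 1 25 255 1 51 1 72 1 */
        return 0;
    }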

I suppose APL might come closest, but it's hardly mainstream.

I find it an interesting thought that soon the only way to get more
performance will be to switch to radically different processor
architectures that work in ways very loosely comparable to MMX or
AltiVec (and their descendants), and to write new programs in new
languages on new OSes that can exploit deep hardware parallelism.

The flipside of the coin may be that current, "traditional" designs
will get smaller and cheaper and use less power, and the only way to
squeeze better performance out of them will be to use smaller, simpler
OSes. There's a chance here for what the mainstream sees as obsolete
or irrelevant OSes and languages to enjoy a revival. The vanguard
could be VMS.

I'd love to see this.

Something amusing happened after my FOSDEM talk in February, which
touched on this. I was discussing some of the ideas on the Squeak
Smalltalk mailing list, and I listed the set of criteria I'd been using
to narrow down my selection of candidates:
• a clean, simple OS, with SMP support, that supported pre-emption,
memory management etc.
• in a type-safe language, with a native-object-code compiler — not
JITTed, not using a VM or runtime
• and a readable language, not something far outside the Algol family
of imperative HLLs
• that was portable across different architectures
• that was FOSS and could be forked
• that was documented and had a user community who knew it
• that can be built with FOSS tools (which RISC OS fails, for instance)
• which is or was used by non-specialists for general purpose computing
• which can usefully access the Internet
• which runs on commodity hardware
• which does not have a strongly filesystem-centric design, so that it
could fit a PMEM-only computer (i.e. not an xNix)

... and several people went "no, that is impossible. Match all of
those at once and you have the null set."

And I said, no, this is why I picked Oberon and A2. The result seemed
to be a number of people who hadn't been paying much attention sitting
up and asking what language/OS this was.

It was similar to trying to summarise what I'd learned about Lisp to a
Unix community in Another Place a few years ago. It all washed over
them until I quoted observations such as Alan Kay's "Lisp is the
Maxwell's Equations of programming languages", which made a few people
suddenly wake up and go read the links I was providing.

Some stuff may yet come back from obscurity. I hope. :-)

-- 
Liam Proven – Profile: https://about.me/liamproven
Email: lpro...@cix.co.uk – gMail/gTalk/gHangouts: lpro...@gmail.com
Twitter/Facebook/LinkedIn/Flickr: lproven – Skype: liamproven
UK: +44 7939-087884 – ČR (+ WhatsApp/Telegram/Signal): +420 702 829 053
