I agree, Ian,

A few of us at I.P. Sharp Associates ported SHARP APL
to the PC when that came out. When IBM announced the
XT/370 expansion card for the PC/XT, we snagged a few
of them, probably with help from Lisa Fincato, our IBM
sales rep. And then, we got the AT/370 card, which was almost
entirely usable as an APL system.  SHARP APL on the AT/370 ran at
about the same speed as an IBM 360/40 mainframe,
so it definitely represented a threat to IBM's big iron business:
The cards were expensive to purchase, but probably
ran about the same price as a one-day rental of a 360/40.

IBM could have cranked the entire PC business into a
370-compatible architecture, but the bean counters ensured
that it was hidden under as many baskets as they could find.
The 370 architecture of that day would still have been a
far superior system to the rubbish X86 "designs" that we have
now. Oh, Atlantis!

I may still have one of those cards kicking around, and
I donated another to the Canadian Computer Museum
at York University. It was still working when I last saw it,
running SHARP APL/PC370 (name?) thanks to an assist from Bill Kindree, and support from Dr. Z at York U.

We took one of our AT/370 systems (or maybe it was just SHARP
APL/PC, running our hand-crafted S370 emulator; not sure...)
to APL86 in Manchester, UK,
and demonstrated it at the IPSA booth there.
I had earlier designed* fast algorithms for inner products
on SHARP APL, and proceeded to race our PC interpreter
on that against Jim Brown and his APL2 (dialup connection)
system. My STAR algorithm did (at least) 32 bits at a time,
so those inner products ran about 1000X faster than previously.

I showed Jim something like +/,M∨.∧⍉M←1000 1000 ⍴0 1
and it took maybe seven seconds. He then tried it on his
gonzo Big Iron, and after waiting a few minutes, gave up,
but could not break out of execution of the expression, so
hung up the phone. He tried again later, with the same result,
only to receive word from the data center operators to
please stop what he was doing, because he had crashed their
entire system. Twice.

Good algorithms win over tin.

Bob

* "Designed" is one of those computer words, akin to T.S. Eliot's:
   "good writers borrow, great writers steal."

   A few people at IPSA (I was not among them, alas. My days
   in supercomputing lay ahead.) implemented a STAR APL,
   an APL interpreter for the CDC STAR-100 supercomputer,
   then being designed and built just outside Toronto.
   This machine had a
   memory-to-memory architecture (no registers, vector or
   otherwise), as was fairly common at the time (IBM 1620,
   IBM 1401). It took a long time for a STAR instruction to
   get started, but once started, it ran at a very good clip,
   much like typical APL interpreters.
   Hence, just like good APL code, good STAR code
   encouraged minimizing instruction counts to get more
   results per op by vector ops.

   The crew implementing STAR APL realized that
   a row-column scalar inner product was not going to work
   well, so somebody (I don't know who, but would like to find
   out, so that I can give them credit in the future...) tweaked
   the computation loop order so that, for Z←⍺ F.G ⍵

     - each element of ⍺ was fetched exactly once
     - that element would be applied against an entire row of ⍵,
       scalar-vector:
          t←⍺[i;k] G ⍵[k;]
     - the resulting vector, t, would be accumulated vector-vector
       (with F, the reduction function):
          Z[i;]←Z[i;] F t
       If the STAR was like other CDC/CRAY architectures,
       it hardware-fused those two ops, so never actually generated t.
       [They had a phrase to describe that generalized fusion
       capability, but I can't remember what it was called.]
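That tweaked loop order might be sketched like this — a minimal sketch in C, using +.× (plus.times) as a concrete stand-in for F.G; the function name and the row-major layout are my own assumptions, not the actual STAR APL code:

```c
#include <stddef.h>

/* Row-oriented inner product Z = A +.x B for an m x n matrix A and
 * an n x p matrix B, both stored in ravel (row-major) order.  Each
 * element of A is fetched exactly once and applied against an entire
 * row of B; the result is accumulated straight into a row of Z, so
 * the temporary vector t is never materialized. */
static void inner_product(const double *a, const double *b, double *z,
                          size_t m, size_t n, size_t p)
{
    for (size_t i = 0; i < m; i++) {
        for (size_t j = 0; j < p; j++)
            z[i*p + j] = 0.0;              /* identity of F (+) */
        for (size_t k = 0; k < n; k++) {
            double aik = a[i*n + k];       /* fetched exactly once */
            for (size_t j = 0; j < p; j++) /* G then F, fused over row k of B */
                z[i*p + j] += aik * b[k*p + j];
        }
    }
}
```

The inner loop is exactly the scalar-vector G step followed by the vector-vector F step, fused — which is why a machine that could chain those two vector ops never needed the temporary.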

       Alas, the STAR's Achilles' Heel was the slow startup
       time for instructions. This meant that it worked great
       on big arrays, and poorly on small ones. [Does this
       sound like an APL interpreter?] Hence, later CDC/CRAY
       architectures had much-improved scalar support.

       Back to Booleans: I was being pestered by one of
       the deep-pocket IPSA customers to "fix" the dismal
       performance of inner product when one of the arguments
       was Boolean. I remembered the STAR APL algorithm,
       and realized that it could enable a few Good Things:

           1. Application of G could work a row of ⍵ at a time,
               so for common G (∨ ∧ ...), we could compute
                32 bits of that with one instruction (Booleans
                are stored one bit per element, in ravel order).

            2. Application of F often would also allow operating
                on 32 bits at a time.

            3. If an element of ⍺ is an identity for G, we can
                skip that computation step, and just use the row
                of ⍵.

            4. Similarly, if an element of ⍺ is a "zero" for G,
                AND if that "zero" is a left identity for F, we can
                skip both computations.

          And so on. The cost of the checks in 3 and 4 is
          amortized over an entire row of ⍵, giving us superior
          performance on densely stored sparse arrays.

         Fast skips over elements of ⍺ are trivial to implement.
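For ∨.∧, points 1-4 collapse nicely, since 0 is both the "zero" of ∧ and the identity of ∨, and 1 is the identity of ∧. A minimal sketch in C — the 32-bit-per-word row layout, word order, and function name are my own assumptions, not SHARP APL's internals:

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-parallel Boolean inner product Z = A or.and B.  A is m x n,
 * one byte per element for simplicity; rows of B and Z are packed
 * 32 Booleans per uint32_t word, pw words per row. */
static void bool_ip(const uint8_t *a, const uint32_t *b, uint32_t *z,
                    size_t m, size_t n, size_t pw)
{
    for (size_t i = 0; i < m; i++) {
        for (size_t j = 0; j < pw; j++)
            z[i*pw + j] = 0;                 /* identity of or */
        for (size_t k = 0; k < n; k++) {
            if (a[i*n + k] == 0)             /* "zero" of and, which is   */
                continue;                    /* also the identity of or:  */
                                             /* skip both steps (point 4) */
            /* a[i;k] = 1 is the identity of and, so the and step
             * vanishes too (point 3): just or in 32 bits of row k
             * of B per instruction (points 1 and 2). */
            for (size_t j = 0; j < pw; j++)
                z[i*pw + j] |= b[k*pw + j];
        }
    }
}
```

Each test against an element of ⍺ is paid once per row of ⍵, which is where the amortization comes from.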

        I think this was at the time when we were making substantial
        interpreter changes for Hitachi, so a bit of good work
        on inner product fit well into the picture. I convinced dba (David
        Allen) to do the low-level design and implementation,
        with wonderful results.

Bob

On 2021-04-13 10:55 a.m., Ian Clark wrote:
> They misunderstood the PC. They thought it was just a toy and ignored it.
Not as I recall. Mainframe division understood it all too well. They fought
like hell in the early 80s to stop it happening. And to stop microcomputers
(the PC wasn't the first, or – as Bill Gates pointed out – the best)
driving out the IT dept from banks and insurance companies, the main milch
cows.

Others in the company saw the victory of micros as inevitable, and wanted a
slice of the action. So they set up Boca Raton behind a Chinese Wall. I
think their battle cry was: No EBCDIC!

The counter-arguments were quite persuasive (e.g. dispersal of the
expertise concentrated in IT depts, so employees would get all these
wonderful PCs but never learn how to use them) - but not persuasive enough,
and their shock-horror projections all came to pass.

Including the ill effects for customers. Wall-to-wall Excel has not been an
unmitigated success.

As for what happened next, I recommend Lou Gerstner's book: *Who Says
Elephants Can't Dance?* Every old-school IBMer's darkest nightmare: a
customer takeover. Lou even got IBM selling chips as a commodity.

Well… you don't get rich selling clothes-pegs to gypsies.

On Tue, 13 Apr 2021 at 13:59, Don Guinn <dongu...@gmail.com> wrote:

They misunderstood the PC. They thought it was just a toy and ignored it.

On Mon, Apr 12, 2021, 7:09 PM Björn Helgason <gos...@gmail.com> wrote:

When I was a product manager there, I was told that when we wanted to
sell something, the selling price should be at least 10 times the
cost.

Less than that they were not interested.

We are not in the fingers and toes business I was told.

That was 30 years ago.

They have been going downhill ever since I left.

On Mon, 12 Apr 2021 at 18:25, 'Rodney Nicholson' via Chat <
c...@jsoftware.com> wrote:

“13 layers of managers.”

The explanation of their survival is, I believe, their huge profit
margins.

I still recall when they got a contract to electronically handle the
Toronto Stock Exchange trading system where they charged $18 per
transaction.  Their cost of course was just a few electrons per
transaction.
They were in effect a monopoly at the time.  And monopolies always
waste huge quantities of resources, accordingly reducing everyone's
living standards.

Rodney.


On Apr 12, 2021, at 10:15 AM, Björn Helgason <gos...@gmail.com>
wrote:
apl lives on even if ibm goes away.

it is really amazing that ibm is still around.

13 layers of managers.

On Mon, 12 Apr 2021 at 13:51, Raul Miller <rauldmil...@gmail.com> wrote:
That's disappointing.

Not surprising -- just disappointing.

Still, there's J, there's Dyalog APL, there's GNU APL, and there's k and q.
Not to mention various hardware array concepts, such as GreenArrays and
GPUs.

And, maybe, IBM will go back up at some point?

Who knows...

--
Raul

On Mon, Apr 12, 2021 at 5:05 AM Björn Helgason <gos...@gmail.com>
wrote:
https://www.ibm.com/support/pages/apl2-whats-new

----------------------------------------------------------------------
For information about J forums see
http://www.jsoftware.com/forums.htm

--
Robert Bernecky
Snake Island Research Inc
18 Fifth Street
Ward's Island
Toronto, Ontario M5J 2B9

berne...@snakeisland.com
tel:       +1 416 203 0854
text/cell: +1 416 996 4286


