On Sun, Aug 6, 2023 at 4:31 PM <immortal.discover...@gmail.com> wrote:

> I agree with Matt. I still think AI can be worked on with a small
> dataset and compute budget. I think upsizing the model is only for
> selling or deploying it for real, not for checking how efficient your
> new code is.
>

Given that Sara Hooker was just about the only influential voice addressing
the path dependency and bandwagon effects that damage *research* in machine
learning -- at a time when everyone is in a gold rush of *engineering* -- it
is really tragic that she got seduced into a position where her talent is,
if not wasted, then at least spent on work someone else could have done just
as well. The world really needs to hear the message she tried to set forth
in The Hardware Lottery, and even then she only partially succeeded in
delivering it.

I say this as the only person circa 1990 with a product applying convolution
hardware to neural network image processing -- way ahead of the GPUs that
eventually put the lie to the SVM supremacy illusion -- and even that only
barely, because I had just come off an SAIC "pig fuck" project (imminent
nuclear war acquisition authority) where I wrote firmware controlling
DataCube's finite impulse response video processing hardware. At the second
IJCNN in San Diego, my company caught the attention of ONR China Lake and
could have demonstrated what GPU hardware would demonstrate 15 years later:
that appropriately quantized and pruned parameters could beat SVMs. But some
PhD asshole stepped in and declared it nonviable, so everyone took off in
the wrong direction for 15 years.

It's happening again with LLMs.

