On 10/1/2024 5:50 AM, John Clark wrote:
On Tue, Oct 1, 2024 at 7:23 AM PGC <multiplecit...@gmail.com> wrote:
/> I don't care what my statements sound like. It's about the
argument. I'm not making statements like "superintelligence is
around the corner",/
*I would maintain it's physically impossible to overhype the
importance of artificial intelligence.*
/> in which case the burden of proof lies with those hyping those
statements./
*There is no burden; things are just heading in an inevitable
direction and, short of starting a thermonuclear war, nobody is going
to be able to stop it. *
/> The exchange with Brent is instructive: can a human level
intelligence be separated from its arguable 3.5 billion year history?/
*Yes.*
So far it has not been. It has mostly been looking up what human level
intelligence has discovered. That's certainly intelligence, even
super-intelligence of a sort. But whether more is really different
remains to be seen.
/> Wouldn't that have to be accounted for? /
*No. *
/> If the current state of development is any indicator, where
they keep enlarging the mathematical linguistic context which
informs the response, then that's a lot of data for just one AI,
even if you argue that early stages of the planet are not necessary./
*True, that's a lot of data, but I don't see your point. For over a
decade the amount of computational ability that an AI has at its
disposal has been doubling every six months; that's considerably
faster than Moore's law, and there is no indication that's gonna stop
anytime soon. And that's not all: due to improvements in software,
improvements largely caused by the AI themselves and not by the humans,
who have only a hazy understanding of what's going on, every 8 months
an AI that uses only half the computational power can reach the same
AI benchmarks. *
What are these, "improvements largely caused by the AI themselves"?
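For what it's worth, taking the two quoted rates at face value
(hardware compute doubling every 6 months, software halving the
compute needed for a fixed benchmark every 8 months; both are John's
figures, not anything I've checked), here is a rough sketch of how
they would compound:

    # Rough sketch of how the two quoted growth rates compound.
    # Assumptions (John's figures, not verified here):
    #   - hardware compute available to an AI doubles every 6 months
    #   - software progress halves the compute needed for a fixed
    #     benchmark every 8 months
    hw_doubling_months = 6.0
    sw_doubling_months = 8.0

    # Effective compute grows as 2**(t/6) * 2**(t/8) = 2**(t * (1/6 + 1/8)),
    # so the combined doubling time is 1 / (1/6 + 1/8) months.
    combined_doubling = 1.0 / (1.0 / hw_doubling_months + 1.0 / sw_doubling_months)
    print(f"combined doubling time: {combined_doubling:.2f} months")   # ~3.43

    # Growth over one decade under those assumptions:
    months = 120
    print(f"effective compute after {months} months: ~{2 ** (months / combined_doubling):.3g}x")

Under those assumptions the "effective compute" behind a fixed
benchmark would double roughly every three and a half months, which is
what makes the claim so strong.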
/> And then superintelligence demands something like "can
accomplish arbitrary tasks/problems much better than a human
and/or all humans". The only phenomenon that has reached that
level that we have evidence for is the development of civilization
and science by billions of lifeforms reaching humans over, taking
your figure, 500 million years./
*True. And that is precisely why I say it is physically impossible to
overhype the importance of AI. *
/> And to demonstrate that somebody is on the path towards
modelling and/or surpassing that, you'd need to show how./
*That will never happen; even at this very early stage nobody has a
detailed understanding of how AIs work.*
Yet you're sure that they will continue to improve at the same rate as
the recent leap based on LLMs. I think that's the very definition of
"overhyping".
/> I'm not sure adding verbal/mathematical memory suffices./
*By contrast, I am very sure of that. As I have already shown, it can
be proven with mathematical precision that the upper limit to the
amount of information needed to make _an entire human being_ is only
_750 megs_, *
Of course that's a human being that can't even speak or walk. How much
more information did it take to make you?
*and the algorithm that humans use to extract knowledge from their
environment must be much, much smaller than that, probably less than 1
MB. *
Unless they use a computer.
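As an aside, the 750 meg figure presumably comes from the raw
information content of the human genome. A quick back-of-the-envelope
check, assuming roughly 3 billion base pairs at 2 bits per base (my
numbers, not taken from the thread):

    # Back-of-the-envelope check of the quoted "750 megs" upper bound,
    # assuming it refers to the raw information content of the human genome.
    # Assumed inputs (mine, not from the thread): ~3e9 base pairs, 2 bits per base.
    base_pairs = 3e9
    bits_per_base = 2                      # four possible bases -> log2(4) = 2 bits
    total_bytes = base_pairs * bits_per_base / 8
    print(f"~{total_bytes / 1e6:.0f} MB")  # ~750 MB, before any compression

That lands at about 750 MB before any compression, which matches the
order of magnitude of the quoted upper bound.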
*There have been important developments in the field of AI, such as the
invention of transformers, but that only advanced things by a couple
of years; the primary reason we didn't have AIs like we have today in
the 1960s is that back then the hardware simply wasn't able to provide
the needed amount of computation. Frank Rosenblatt invented the
Perceptron way back in 1957, and its basic architecture was similar to
what we use today, but it couldn't do much because Rosenblatt's
hardware was pathetically primitive and agonizingly slow. *
*I recently watched an old Nova documentary about AI from the 1970s on
YouTube, and a guy said that to develop an AI we would need an Einstein,
or maybe 10 Einsteins, and about 1000 very good engineers, and that it's
important that the Einsteins come before the engineers. But it turned
out all we needed was the engineers; Einstein was unnecessary.*
Which already makes one suspect that the improvement may just be a
matter of scope and speed and will reach a ceiling well below Einstein.
Brent
John K Clark See what's on my new list at Extropolis
<https://groups.google.com/g/extropolis>