On Sun, Jun 16, 2024 at 10:26 PM PGC <multiplecit...@gmail.com> wrote:

*> A lot of the excitement around LLMs is due to confusing skill/competence
> (memory based) with the unsolved problem of intelligence,*


Intelligence was an unsolved problem, but not anymore; it was solved about
18 months ago. Certainly, if we thought we were dealing with a human rather
than a machine, you and I would say that person was quite intelligent.

* > There is a difference between completing strings of words*


Why? Basically all Einstein did was complete this string of words: "in
general, the way things behave when they move close to the speed of light
and gravity becomes very strong is ...."

> *As there isn't a perfect test for intelligence, much less consensus on
> its definition,*


That is true, but nevertheless that lack of a definition has not
prevented us from judging that some of the human beings we have dealings
with are quite intelligent while others are extremely stupid. How are we
able to do that when we have no definition of intelligence? It's possible
because we have something much better than a definition: we have examples
of intelligent actions. After all, the definitions in a dictionary are made
of words, and those words are also in the dictionary, and they too are made
of words. The only thing that gets us out of that infinite loop is
examples; that is how lexicographers got the knowledge to write their
dictionary.



> *> you can always brute force some LLM through huge compute and large,
> highly domain specific training data, to "solve" a set of problems;*


I don't know what those quotation marks are supposed to mean, but if you
are able to "solve" a set of problems then the problems have been solved;
the method of doing so is irrelevant. Are you sure you're not whistling
past the graveyard?

*> you might find the following interview with Chollet interesting*
> *Francois Chollet - LLMs won’t lead to AGI - $1,000,000 Prize to find true
> solution <https://www.youtube.com/watch?v=UakqL6Pj9xo>*



I watched the video, thank you for recommending it. I give Chollet credit
for devising the ARC AI benchmark, and for offering a prize of $500,000 to
the first open-source developer who makes an AI program that scores 85% on
that benchmark. The average human is supposed to be able to get an 80% on
ARC; I'm a little skeptical that the average human could actually get a
score that high, but never mind. It's true that most large AI programs
don't do very well on ARC, but Jack Cole wrote a very small AI program of
only 240 million parameters that beat GPT-4 on that benchmark, despite the
fact that GPT-4 has 1.76 trillion parameters, about 7,300 times as many.
And Cole's program was running on just one P100 processor, which is about
a tenth as powerful as an H100, yet it achieved a score of 34% on ARC; not
bad considering that two years ago no program could do better than 0%.

Chollet says he wouldn't be impressed no matter how high a program scored
on ARC unless he closely examined how the AI was trained and was certain
the good results were not the result of mere memorization (whatever that
means) and that all the questions were 100% novel to it. But no situation
is ever 100% novel; if nothing else, they all involve the distribution of
matter and energy in spacetime. In effect, Chollet is saying that if an AI
passes a benchmark then there must be something wrong with the benchmark.
I think the man is more interested in making excuses than in finding the
truth, in a desperate attempt to preserve the last vestiges of vitalism,
the idea that humans, and only humans, have some sort of magical secret
sauce that a mere machine could never emulate.
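
By the way, for anyone who wants to check the arithmetic behind that
"about 7,300 times" figure, here is a quick back-of-the-envelope sketch in
Python; the parameter counts are just the ones quoted above, not something
I have independently verified:

    # Rough sanity check of the parameter ratio quoted above
    gpt4_params = 1.76e12  # GPT-4: reportedly 1.76 trillion parameters
    cole_params = 240e6    # Jack Cole's model: 240 million parameters
    ratio = gpt4_params / cole_params
    print(f"GPT-4 has roughly {ratio:,.0f} times as many parameters")  # ~7,333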

Also, Chollet doesn't do a very good job explaining why a machine, which
is supposed to be incapable of doing anything novel, nevertheless managed
to learn to translate between English and Kalamang, a task the benchmark
paper describes as:

*"learning to translate between English and Kalamang -- a language with
less than 200 speakers and therefore virtually no presence on the web --
using several hundred pages of field linguistics reference materials. This
task framing is novel in that it asks a model to learn a language from a
single human-readable book of grammar explanations, rather than a large
mined corpus of in-domain data"*


*A Benchmark for Learning to Translate a New Language from One Grammar Book
<https://arxiv.org/abs/2309.16575>*

And I was dumbfounded when Chollet said OpenAI has held back progress
towards AI by 5 to 10 years, because for the first time in about 50 years
they made something that actually worked and other companies had a hunch
that they might just be onto something!

 John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>





John, as you enjoyed that podcast with Aschenbrenner, you might find the
> following one with Chollet interesting. Imho you cannot scale past not
> having a more advanced approach to program synthesis (which nonetheless
> could be informed or guided by LLMs to deal with the combinatorial
> explosion of possible program synthesis).
>




>
>
> Sabine Hossenfelder came out with a video attempting to discredit Leopold
>> Aschenbrenner. She failed.
>>
>> Is the Intelligence-Explosion Near? A Reality Check
>> <https://www.youtube.com/watch?v=xm1B3Y3ypoE&t=553s>
>>
>> I wrote this in the comment section of the video:
>>
>> "You claim that AI development will slow because we will run out of
>> data, but synthetic data is already being used to train AIs and it actually
>> works! AlphaGo was able to go from knowing nothing about the most
>> complicated board game in the world called "GO" to being able to play it at
>> a superhuman level in just a few hours by using synthetic data, it played
>> games against itself. As for power, during the last decade the total power
>> generation of the US has remained flat, but during that same decade the
>> power generation of China has not, in just that same decade China
>> constructed enough new power stations to equal power generated by the
>> entire US. So a radical increase in electrical generation capacity is
>> possible, the only thing that's lacking is the will to do so. When it
>> becomes obvious to everybody that the first country to develop a super
>> intelligent computer will have the capability to rule the world there
>> will be a will to build those power generating facilities as fast as
>> humanly possible. Perhaps they will use natural gas, perhaps they will use
>> nuclear fission."
>>
>>
>>
>
>
