On Sun, Jun 16, 2024, 10:26 PM PGC <multiplecit...@gmail.com> wrote:

> A lot of the excitement around LLMs is due to confusing skill/competence
> (memory based) with the unsolved problem of intelligence, its most
> optimal/perfect test etc. There is a difference between completing strings
> of words/prompts relying on memorization, interpolation, pattern
> recognition based on training data and actually synthesizing novel
> generalization through reasoning or synthesizing the appropriate program on
> the fly. As there isn't a perfect test for intelligence, much less
> consensus on its definition, you can always brute force some LLM through
> huge compute and large, highly domain specific training data, to "solve" a
> set of problems; even highly complex ones. But as soon as there's novelty
> you'll have to keep doing that. Personally, that doesn't feel like
> intelligence yet. I'd want to see these abilities combined with the program
> synthesis ability; without the need for ever vaster, more specific
> databases etc. to be more convinced that we're genuinely on the threshold.


I think there is no more to intelligence than pattern recognition and
extrapolation (essentially, the same techniques required for improving
compression). It is also what science is concerned with: compressing
observations of the real world into a small set of laws (patterns) that
enable predictions. And prediction is the essence of intelligent action:
every goal-directed action requires predicting the probable outcomes of
each possible behavior, and then choosing the behavior with the highest
expected reward.
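
To make that last step concrete, here is a toy sketch in Python (the
actions, outcome probabilities, and rewards are invented purely for
illustration): predict the outcomes of each candidate behavior, score each
by its expected reward, and pick the best.

def expected_reward(outcomes):
    # outcomes: list of (probability, reward) pairs predicted for one action
    return sum(p * r for p, r in outcomes)

# Hypothetical predictions an agent might make for two candidate actions.
predictions = {
    "take umbrella": [(0.9, 1.0), (0.1, -0.2)],
    "leave it home": [(0.6, 0.5), (0.4, -2.0)],
}

best_action = max(predictions, key=lambda a: expected_reward(predictions[a]))
print(best_action)  # -> "take umbrella"

A real agent's predictive model is of course vastly more complicated, but
the decision rule is the same.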

I think this can explain why even a problem as seemingly basic as "word
prediction" can (when mastered to a sufficient degree) break through into
general intelligence. Any situation can be described in language, and being
asked to predict the next words requires understanding the underlying
reality well enough to accurately model the things those words describe. I
confirmed this by describing an elaborate physical setup and asking GPT-4
to predict and explain what it thought would happen over the next hour. It
did so perfectly, and also explained the consequences of various
alterations I later proposed.
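
The connection to compression can be made explicit with another toy sketch
(the tiny hand-made bigram "model" below is invented for illustration): the
better the model predicts each next word, the fewer bits (-log2 of the
probability it assigned) are needed to encode the text.

import math

# Hand-made next-word probabilities, purely illustrative.
bigram_probs = {
    ("the", "cat"): 0.5,
    ("cat", "sat"): 0.9,
    ("sat", "down"): 1.0,
}

def code_length_bits(words):
    # Bits needed to encode the sequence given the model's predictions.
    return sum(-math.log2(bigram_probs[(a, b)]) for a, b in zip(words, words[1:]))

print(code_length_bits(["the", "cat", "sat", "down"]))  # about 1.15 bits

A model that predicted every next word with certainty would compress the
text to almost nothing; a model that merely guessed would not compress it
at all.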

Since thousands, or perhaps millions, of patterns exist in the training
corpus, language models can come to learn, recognize, and extrapolate all
of them. This is what we think of as generality: a repertoire of pattern
recognition large enough that it appears general.

Jason



> John, as you enjoyed that podcast with Aschenbrenner, you might find the
> following one with Chollet interesting. Imho you cannot scale past not
> having a more advanced approach to program synthesis (which nonetheless
> could be informed or guided by LLMs to deal with the combinatorial
> explosion of possible program synthesis).
>
> https://www.youtube.com/watch?v=UakqL6Pj9xo
> On Friday, June 14, 2024 at 7:28:50 PM UTC+2 John Clark wrote:
>
>> Sabine Hossenfelder came out with a video attempting to discredit Leopold
>> Aschenbrenner. She failed.
>>
>> Is the Intelligence-Explosion Near? A Reality Check
>> <https://www.youtube.com/watch?v=xm1B3Y3ypoE&t=553s>
>>
>> I wrote this in the comment section of the video:
>>
>> "You claim that AI development will slow because we will run out of
>> data, but synthetic data is already being used to train AIs and it actually
>> works! AlphaGo was able to go from knowing nothing about the most
>> complicated board game in the world called "GO" to being able to play it at
>> a superhuman level in just a few hours by using synthetic data, it played
>> games against itself. As for power, during the last decade the total power
>> generation of the US has remained flat, but during that same decade the
>> power generation of China has not, in just that same decade China
>> constructed enough new power stations to equal power generated by the
>> entire US. So a radical increase in electrical generation capacity is
>> possible, the only thing that's lacking is the will to do so. When it
>> becomes obvious to everybody that the first country to develop a super
>> intelligent computer will have the capability to rule the world there
>> will be a will to build those power generating facilities as fast as
>> humanly possible. Perhaps they will use natural gas, perhaps they will use
>> nuclear fission."
>>
>>   John K Clark    See what's on my new list at  Extropolis
>> <https://groups.google.com/g/extropolis>
>>
>>
>>
