On Wed, Jun 19, 2024 at 6:05 PM Brent Meeker <meekerbr...@gmail.com> wrote:

> You can always add some randomness to a computer program.  LLMs aren't
> deterministic now.  Human intelligence may very well be memory plus
> randomness, although I'd bet on the inclusion of some inference
> algorithms.  The randomness doesn't even have to be in the brain.  People
> interact with their environment, which provides a lot of effective
> randomness plus some relevant prompts.
>
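The controlled randomness Brent describes is easy to make concrete: LLM decoders typically sample from a temperature-scaled softmax over next-token scores. A minimal sketch (the scores here are invented for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from unnormalized scores via softmax with temperature.
    As temperature -> 0 the choice approaches greedy (deterministic);
    higher temperatures flatten the distribution, adding randomness."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical next-token scores: at low temperature the top-scoring
# token wins almost every time; at high temperature the choices vary.
logits = [4.0, 2.0, 1.0]
low = [sample_with_temperature(logits, 0.1) for _ in range(100)]
high = [sample_with_temperature(logits, 5.0) for _ in range(100)]
print(low.count(0), len(set(high)))
```

The same program, run twice, gives different outputs at high temperature and essentially identical outputs at low temperature, which is all "LLMs aren't deterministic now" amounts to mechanically.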

Yes, I think there is no great mystery to creativity. It requires only (1)
random permutation/combination, and (2) an evaluation function: *how much
better is this new thing compared to the previous thing?* This is the
driver behind all the innovation in biology produced by natural selection.
And this same mechanism is replicated in the technique of "genetic
programming <https://en.wikipedia.org/wiki/Genetic_programming>." Koza, who
invented genetic programming, used it to create his "invention machine
<https://www.popsci.com/scitech/article/2006-04/john-koza-has-built-invention-machine/>"
which has created patent-worthy improvements across multiple domains of
technology.

I use genetic programming to evolve bots, and in only a few generations,
they move from stumbling around at random to deriving unique,
environment-specific strategies to maximize their ability to feed
themselves while avoiding obstacles:

https://www.youtube.com/watch?v=InBsqlWQTts&list=PLq_mdJjNRPT11IF4NFyLcIWJ1C0Z3hTAX&index=2

There is no intelligence imparted to the design of the bots. They evolve
purely based on random variation of traits of the top performers (as
evaluated based on how much they ate during their life).
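The loop behind this is tiny. A toy sketch of the mechanism (not the actual bot code): random variation of the top performers plus an evaluation function, where "food eaten" is stood in for by an arbitrary made-up fitness function:

```python
import random

def evolve(fitness, genome_len=10, pop_size=30, generations=40,
           elite=5, mutation_rate=0.1, rng=random):
    """Evolve genomes (lists of floats) using only the two ingredients:
    (1) random variation of the top performers, (2) an evaluation function."""
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate: rank the population by fitness (e.g. food eaten).
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # keep only the top performers
        # Vary: children are randomly mutated copies of the parents.
        children = []
        while len(children) < pop_size - elite:
            child = list(rng.choice(parents))
            for i in range(genome_len):
                if rng.random() < mutation_rate:
                    child[i] += rng.gauss(0, 0.3)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy evaluation function: a genome "eats more" the closer its values
# are to an (arbitrary) ideal of all ones.
def food_eaten(genome):
    return -sum((g - 1.0) ** 2 for g in genome)

best = evolve(food_eaten)
print(food_eaten(best))
```

No intelligence is imparted anywhere in this loop, yet the best genome's fitness climbs steadily generation after generation, which is the whole point.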

Jason


>
>
> On 6/19/2024 5:55 AM, PGC wrote:
>
> I'm hypothesizing here, as the nature of intelligence is still a mystery.
> Thank you, Terren, for your thoughtful contribution. You aptly highlight
> the confusion between skill and intelligence. Jason and John could be
> right; intelligence might emerge from advanced LLMs. The recent
> achievements are impressive. The differences between models like Gemini and
> ChatGPT might stem from better data curation rather than compute power.
>
> However, I see LLMs currently more as assistants that help us organize and
> structure our work more efficiently. Terence Tao isn't talking about
> replacing mathematicians but about enhancing collaboration and
> verification. If LLMs were truly intelligent, all jobs, including AI
> researchers', would soon vanish. But I don't foresee real engineers, AI
> researchers, or IT departments being replaced in the short to mid-term.
> There's too much novelty and practical knowledge involved in complex human
> work that LLMs can't replicate.
>
> Take engineers, for example. Much of their work relies on practical
> experience and intuition developed over years. LLMs aren't producing
> groundbreaking results like Ramanujan's infinite series; they're more
> about aiding in tasks like automated theorem proving. Intelligence might
> just be memory and vast training data, but I believe there's an element of
> freedom in human reasoning that leads to novel ideas.
>
> Consider Russell's best ideas coming while walking to the coffee machine.
> This unstructured thinking grants fresh perspectives. Creativity often
> involves discarding old approaches, a process that presupposes freedom.
> Machines would need to run for a long time, or even endlessly, reasoning in
> inscrutable code, which is neither practical nor desirable. Alternatively,
> someone could find a way to bring inference to LLMs that effectively reduces
> the infinite space of all possible programs, enabling effective synthesis of
> new programs. Fully deterministic and static programs are not enough to deal
> with the complex situations we face every day. There's always some element of
> novelty that we have to deal with, combining reasoning and memory.
>
> Ultimately, while everyone appreciates a helpful assistant, few truly seek
> machines that challenge our understanding or autonomy. That's why I find
> the way we talk about LLMs and AGI a bit disingenuous. And no, this is not a
> case of setting the bar higher and higher to preserve some kind of notion
> of human superiority. If all those jobs are replaced in short order, I'll
> just be wrong empirically speaking, and you can all make fun of these posts
> and yell "told you so".
>
> On Tuesday, June 18, 2024 at 9:24:07 PM UTC+2 Jason Resch wrote:
>
>>
>>
>> On Sun, Jun 16, 2024, 10:26 PM PGC <multipl...@gmail.com> wrote:
>>
>>> A lot of the excitement around LLMs comes from confusing skill/competence
>>> (which is memory-based) with the unsolved problem of intelligence and how
>>> best to test for it. There is a difference between completing strings of
>>> words/prompts by relying on memorization, interpolation, and pattern
>>> recognition over training data, and actually synthesizing novel
>>> generalizations through reasoning, or synthesizing the appropriate program
>>> on the fly. As there isn't a perfect test for intelligence, much less
>>> consensus on its definition, you can always brute-force some LLM, through
>>> huge compute and large, highly domain-specific training data, to "solve" a
>>> set of problems, even highly complex ones. But as soon as there's novelty,
>>> you'll have to keep doing that. Personally, that doesn't feel like
>>> intelligence yet. I'd want to see these abilities combined with program
>>> synthesis, without the need for ever vaster, more specific
>>> databases etc., to be more convinced that we're genuinely on the threshold.
>>
>>
>> I think there is no more to intelligence than pattern recognition and
>> extrapolation (essentially, the same techniques required for improving
>> compression). It is also the same thing science is concerned with:
>> compressing observations of the real world into a small set of laws
>> (patterns) which enable predictions. And prediction is the essence of
>> intelligent action, as all goal-centered action requires predicting
>> probable outcomes that may result from any of a set of possible behaviors
>> that may be taken, and then choosing the behavior with the highest expected
>> reward.
>>
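That last step, choosing the behavior with the highest expected reward, can be sketched directly; the actions and outcome probabilities below are invented for illustration (a real agent's predictive model would supply them):

```python
def expected_reward(outcomes):
    """outcomes: list of (probability, reward) pairs predicted for an action."""
    return sum(p * r for p, r in outcomes)

def choose_behavior(predictions):
    """Pick the action whose predicted outcome distribution has the
    highest expected reward."""
    return max(predictions, key=lambda a: expected_reward(predictions[a]))

# Hypothetical predictions for three possible behaviors:
predictions = {
    "wait":    [(1.0, 0.0)],                 # expected reward 0.0
    "explore": [(0.5, 2.0), (0.5, -1.0)],    # expected reward 0.5
    "exploit": [(0.9, 1.0), (0.1, -2.0)],    # expected reward 0.7
}
print(choose_behavior(predictions))  # picks "exploit" (expected reward 0.7)
```

Everything hard lives in producing good `predictions`, which is exactly where pattern recognition and extrapolation come in.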
>> I think this can explain why even a problem as seemingly basic as "word
>> prediction" can (when mastered to a sufficient degree) break through into
>> general intelligence. This is because any situation can be described in
>> language, and being asked to predict next words requires understanding the
>> underlying reality to a sufficient degree to accurately model the things
>> those words describe. I confirmed this by describing an elaborate physical
>> setup and asking GPT-4 to predict and explain what it thought would happen
>> over the next hour. It did so perfectly, and also explained the
>> consequences of various alterations I later proposed.
>>
>> Since thousands, or perhaps millions, of patterns exist in the
>> training corpus, language models can come to learn, recognize, and
>> extrapolate all of them. This is what we
>> think of as generality (a sufficiently large repertoire of pattern
>> recognition that it appears general).
>>
>> Jason
>>
>>
>>
>>> John, as you enjoyed that podcast with Aschenbrenner, you might find the
>>> following one with Chollet interesting. Imho, you cannot scale your way
>>> past the lack of a more advanced approach to program synthesis (which
>>> could nonetheless be informed or guided by LLMs to deal with the
>>> combinatorial explosion of possible programs).
>>>
>>> https://www.youtube.com/watch?v=UakqL6Pj9xo
>>> On Friday, June 14, 2024 at 7:28:50 PM UTC+2 John Clark wrote:
>>>
>>>> Sabine Hossenfelder came out with a video attempting to discredit
>>>> Leopold Aschenbrenner. She failed.
>>>>
>>>> Is the Intelligence-Explosion Near? A Reality Check
>>>> <https://www.youtube.com/watch?v=xm1B3Y3ypoE&t=553s>
>>>>
>>>> I wrote this in the comment section of the video:
>>>>
>>>> "You claim that AI development will slow because we will run out of
>>>> data, but synthetic data is already being used to train AIs, and it
>>>> actually works! AlphaGo Zero went from knowing nothing about Go, the most
>>>> complicated board game in the world, to playing it at a superhuman level
>>>> in a matter of days by using synthetic data: it played games against
>>>> itself. As for power, during the last decade the total power generation
>>>> of the US has remained flat, while the power generation of China has not;
>>>> in that same decade China constructed enough new power stations to equal
>>>> the power generated by the entire US. So a radical increase in electrical
>>>> generation capacity is possible; the only thing that's lacking is the
>>>> will to do so. When it becomes obvious to everybody that the first
>>>> country to develop a superintelligent computer will have the capability
>>>> to rule the world, there will be a will to build those power-generating
>>>> facilities as fast as humanly possible. Perhaps they will use natural
>>>> gas, perhaps they will use nuclear fission."
>>>>
>>>>   John K Clark    See what's on my new list at  Extropolis
>>>> <https://groups.google.com/g/extropolis>
>>>>
>>>>
>>>>
>>>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Everything List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to everything-li...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/everything-list/1a991958-5828-4405-83b1-5c8a6671dad6n%40googlegroups.com
>>> <https://groups.google.com/d/msgid/everything-list/1a991958-5828-4405-83b1-5c8a6671dad6n%40googlegroups.com?utm_medium=email&utm_source=footer>
>>> .
>>>
