On Fri, Jun 28, 2024 at 11:57 AM PGC <multiplecit...@gmail.com> wrote:

> Jason,
>
> There's no universal consensus on intelligence beyond the broad outlines
> of the narrow vs general distinction. This is reflected in our informal
> discussion: some emphasize that effective action should be the result and
> are satisfied with a certain set and level of capabilities. However, I'm
> less sure whether that paints a complete picture. "General" should mean
> what it means. Brent talks about an integration system that does the
> modelling. But reflection, even the redundant kind that doesn't immediately
> yield anything, may lead to a Russell coffee break moment. That seems to
> play a role, with people taking years, decades, generations, and even
> entire civilizations to discover that a problem may be ill-posed,
> unsolvable, or solvable.
>
> We look at historical developments and ask whether all of it is required
> to have one Newton appear every now and then. Or whether we could've had 10
> every generation with different values or politics, for instance. Those
> would be gigantic simulations to run, but who knows? Maybe we could get to
> Euclidean geometry far more cheaply than we did. Instead, we are making
> gigantic investments into known machine learning techniques with huge
> hardware boosts, calling it AI for marketing reasons (with many marketing
> MBA types becoming "Chief AI Officer" because they have a ChatGPT
> subscription), to build robots to be our servants, maids, assistants, and
> secretaries.
>
> I'm not trying to play jargon police or anything—everyone has a right to
> take part in the intelligence discussion. But imho it's misleading to
> equate developments in machine learning driven by hardware advances with
> true intelligence.
>
I also see it as surprising that through hardware improvements alone, and
without specific breakthroughs in algorithms, we should see such great
strides in AI. But I can also see a possible explanation. Nature has
likewise discovered something relatively simple in its behavior and
capabilities which, when aggregated into ever larger collections, yields
greater and greater intelligence and capability: the neuron.

There is relatively little difference in neurons across mammals. A human
neuron is little different from a mouse neuron, for example. Yet a human
brain has roughly a thousand times more of them than a mouse brain does,
and this difference in scale seems to be the only meaningful difference
behind what mice and humans have each been able to accomplish.

Deep learning, and the progress in that field, is a microcosm of this
example from nature. Networks of artificial neurons are proven to be
universal function approximators. So the more of them that are aggregated
together in one network, the richer and more complex the functions they
can learn to approximate. Humans no longer write the algorithms these
neural networks implement; the training process comes up with them. And
much like the algorithms implemented in the human brain, they are expressed
in a representation so opaque that it escapes our capacity to understand them.
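To make the "universal function approximator" point concrete, here is a
minimal sketch (plain NumPy; every name and hyperparameter is my own
arbitrary choice) of a one-hidden-layer network learning to approximate
sin(x). No one writes the sine algorithm; gradient descent finds weights
that encode it:

    # Minimal sketch: a one-hidden-layer tanh network fits sin(x) by
    # batch gradient descent. Illustrative only, not a serious trainer.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-np.pi, np.pi, size=(256, 1))  # training inputs
    y = np.sin(x)                                  # target function

    H = 64                                         # hidden units
    W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)
    lr = 0.05

    for step in range(5000):
        h = np.tanh(x @ W1 + b1)                   # hidden activations
        pred = h @ W2 + b2                         # network output
        err = pred - y                             # gradient of MSE/2
        # Backpropagate the error through both layers (chain rule).
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)             # tanh derivative
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    print("final MSE:", float((err**2).mean()))    # small: the net "learned" sine

The weights that come out are exactly the kind of opaque representation I
mean: nothing in W1 or W2 reads like code for sine, yet together they
compute a good approximation of it.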

So I would argue that there have been massive breakthroughs in the
algorithms underlying the advances in AI; we just don't know what those
breakthroughs are.

These algorithms are products of systems which now have trillions of
parts. Even the best human programmers can't know the complete details of
projects with around a million lines of code (never mind a trillion).

So have trillion-parameter neural networks unlocked the algorithms for true
intelligence? And how would we know if they had?

Might it happen at 100B, 1T, 10T, or 100T parameters? I think the human
brain, with its ~600T connections, might signal an upper bound on how many
are required, but the brain does a lot of other things too, so the bound
could be lower.
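For a rough sense of where today's models sit on that axis, a trivial
back-of-the-envelope comparison (the figures are just the approximate
estimates quoted above, nothing more):

    # Rough orders-of-magnitude comparison; all figures are approximate.
    BRAIN_SYNAPSES = 600e12  # ~600T connections in the human brain (estimate)

    for label, params in [("100B", 100e9), ("1T", 1e12),
                          ("10T", 10e12), ("100T", 100e12)]:
        ratio = BRAIN_SYNAPSES / params
        print(f"{label:>4} parameters is {ratio:,.0f}x short of the brain's synapses")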



> Of course, there can be synergistic effects that Minsky speculates about,
> but we can hardly manage resource allocation for all the persons with
> actual AGI abilities alive today, which makes me pretty sure that this
> isn't what most people want. They want servants that are maximally
> intelligent to do what they are told, revealing something about our own
> desires. This is the desire for people as tools.
>
> Personally, I lean towards viewing intelligence as the potential to
> reflect plus remaining open to novel approaches to any problem. Sure,
> capability/ability is needed to solve a problem, and intelligence is
> required to see that, but at some point in acquiring abilities, folks seem
> to lose the ability to consider fundamentally novel approaches, often
> ridiculing them etc. There seems to be a point where ability limits the
> potential for new approaches to a problem.
>

Yes, this is what Bruno considers the "competence" vs. "intelligence"
distinction. He might say that a baby is maximally intelligent, yet
minimally competent.


> Children and individuals less constrained by personal beliefs and
> ideologies are often more intelligent in this view because their potential
> to change and synthesize new approaches is a genuine reflection of
> accessing a potentially infinite possibility space of problem
> formulations/solutions; or even choosing to let a problem be and drop it. I
> prefer this approach as it keeps the subject as a first principle, instead
> of labeling them as dumb for failing a memorization test or being an
> inadequate slave for some
>
> Even though it's a clever marketing strategy for Silicon Valley to extract
> money and value from us while training their models, I don't dismiss LLMs
> in principle and think the discussions they raise can be beneficial. It’s
> revealing to see where they fail and how we can make them appear
> intelligent by "cheating." This opens up new problems, such as designing
> tests that require on-the-fly creativity, potentially better than the ARC
> test. Could we tune mathematical or STEM education to be more creative
> through such problems, allowing for many possible solutions? This might
> shed light on different creative and/or reasoning styles and open the door
> to optimizing education for them (instead of the memorization maximization
> paradigms in place with most testing, which is why the pedagogical
> community is panicking over their students' use of AI).
>
> In this way or similarly, education could approach the creative component
> in STEM fields and perform research on whether this enhances general human
> problem formulation and solving, supplementing the less constrained, more
> open approaches found in the arts. This isn't about revolutionizing
> anything but seeing research potential.
>
> To address your question: even if we could combine all existing AIs into a
> single robot, I doubt it would constitute general intelligence. The
> aggregated capabilities might seem impressive, but I speculate that general
> intelligence involves continuous learning, adaptation, and particularly
> reflection beyond current AI's capacity. It would require integrating these
> capabilities in a way that mirrors human cognitive processes as Brent
> suggested, which I feel we are far from achieving. But now, who knows what
> happens behind closed doors with a former NSA person on the board of
> OpenAI? We can guess.
>

Would you agree that this (relatively simple in conception, though
computationally intractable in practice) algorithm produces general
intelligence? https://en.wikipedia.org/wiki/AIXI (more details:
https://arxiv.org/abs/cs/0004001 )
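For anyone who wants the flavor of it in code, here is a toy, finite-horizon
caricature of AIXI's decision rule that I put together purely for
illustration (it is not from Hutter's paper). True AIXI takes an expectation
over all computable environments weighted by 2^-(program length), which is
incomputable, and it updates on each new observation; here a hand-picked
list of three models stands in for that universal mixture, and the
observation loop is omitted:

    # Toy, finite-horizon caricature of AIXI's expectimax decision rule.
    # Real AIXI sums over ALL computable environments weighted by
    # 2^-(program length); that is incomputable, so a tiny hand-picked
    # model class stands in for the universal Solomonoff mixture here.
    from itertools import product

    ACTIONS = [0, 1]
    HORIZON = 3

    # Each "environment" is (program_length_in_bits, reward_fn(actions)).
    ENVIRONMENTS = [
        (3, lambda acts: sum(acts)),                 # rewards action 1
        (4, lambda acts: sum(1 - a for a in acts)),  # rewards action 0
        (5, lambda acts: sum(a == i % 2 for i, a in enumerate(acts))),  # alternation
    ]

    def expected_return(acts):
        """Reward averaged over environments, weighted by the 2^-length prior."""
        weights = [2.0 ** -length for length, _ in ENVIRONMENTS]
        total = sum(w * env(acts) for w, (_, env) in zip(weights, ENVIRONMENTS))
        return total / sum(weights)

    # AIXI-style choice: take the first action of the plan that maximizes
    # prior-weighted expected reward over the horizon.
    best = max(product(ACTIONS, repeat=HORIZON), key=expected_return)
    print("best plan:", best, "-> first action:", best[0])

Even this caricature shows the structure that makes the framing useful to
me: a simplicity-weighted prior over world models, combined with
maximization of expected reward over a horizon.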

One thing I like about framing intelligence in this way is that, even if it
is not practically useful, it helps us recognize the key aspects required
for something to behave intelligently.

Jason


>
> On Wednesday, June 26, 2024 at 9:46:23 PM UTC+2 Jason Resch wrote:
>
>> On Wed, Jun 26, 2024 at 3:33 PM PGC <multipl...@gmail.com> wrote:
>>
>>> Your excitement about Claude 3.5 Sonnet's performance is understandable.
>>> It's an impressive development, but it's crucial to remember that beating
>>> benchmarks or covering a wide range of conversational topics does not
>>> equate to general intelligence. I wish we lived in a context where I could
>>> encourage you to provide evidence for your claims about AI capabilities and
>>> future predictions, but Claude, OpenAI, etc. are... not exactly open.
>>>
>>> Then we could discuss empirical data and trends instead of betting: I
>>> don't know what the capability ceiling is for narrow AI development behind
>>> closed doors, now or in the coming years, nor have I pretended to.
>>> Wide/general is not narrow/specific and brittle. But I am happy for you if
>>> you feel that you can converse intelligently with it; I know what you mean.
>>> For my taste it's a tad obsequious and not very original, i.e., I am
>>> providing all the originality of the conversation, which some large
>>> corporation is sucking up without paying me for it.
>>>
>>>
>>> *I don't want clever conversation*
>>> *I never want to work that hard, mmm* - Billy Joel
>>>
>>
>> PGC,
>>
>> Would you consider the aggregate capabilities of all AIs that have been
>> created to date, as a general intelligence? In the spirit of what Minsky
>> said here:
>>
>> "Each practitioner thinks there’s one magic way to get a machine to be
>> smart, and so they’re all wasting their time in a sense. On the other hand,
>> each of them is improving some particular method, so maybe someday in the
>> near future, or maybe it’s two generations away, someone else will come
>> around and say, ‘Let’s put all these together,’ and then it will be smart."
>> -- Marvin Minsky
>>
>> I wrote that human general intelligence consists of the following
>> abilities:
>>
>>    - Communicate via natural language
>>    - Learn, adapt, and grow
>>    - Move through a dynamic environment
>>    - Recognize sights and sounds
>>    - Be creative in music, art, writing and invention
>>    - Reason with logic and rationality to solve problems
>>
>> I think progress exists across each of these domains. While the best
>> humans in their area of expertise may beat the best AIs, it is arguable
>> that the AI systems which exist in these domains are better than the
>> average human in that area.
>>
>> This article I wrote in 2020 is quite dated, but it shows that even back
>> then, we had machines that could be called creative:
>>
>> https://alwaysasking.com/when-will-ai-take-over/#Creative_abilities_of_AI
>>
>> If we could somehow cobble together all the AIs that we have made so
>> far and integrate them into a robot body, would that be something we could
>> regard as generally intelligent? And if not, what else would need to be
>> done?
>>
>> Jason
>>
>>
>>
>>> On Monday, June 24, 2024 at 11:02:05 PM UTC+2 John Clark wrote:
>>>
>>>> On Mon, Jun 24, 2024 at 10:00 AM PGC <multipl...@gmail.com> wrote:
>>>>
>>>>
>>>>> *> And for everybody here assuming the Mechanist ontology, which
>>>>> implies the Strong AI thesis, i.e. the assertion that a machine can 
>>>>> think,*
>>>>>
>>>>
>>>> I don't know about everybody but I certainly have that view because the
>>>> only alternative is vitalism, the idea that only life, especially
>>>> human life, has a special secret sauce that is not mechanistic, that is to
>>>> say does not follow the same laws of physics as non-living things.  And
>>>> that view has been thoroughly discredited since 1859 when Darwin wrote "On
>>>> the Origin of Species".
>>>>
>>>>
>>>>
>>>>> *> I am curious as to why any of you would assume that general
>>>>> intelligence and mind would arise from a narrow AI.*
>>>>>
>>>>
>>>> If a human could converse with you as intelligently as Claude can on
>>>> such a wide range of unrelated topics you would never call his range of
>>>> interest narrow, but because Claude's brain is hard and dry and not soft
>>>> and squishy you do.  I'll tell you what, let's make a bet: I bet that an AI
>>>> will win the International Mathematical Olympiad in less than 3 years,
>>>> perhaps much less. I also bet that in less than 3 years the main political
>>>> issue in every major country will not be unlawful immigration or crime or
>>>> even an excess of wokeness; it will be what to do about AI, which is taking
>>>> over jobs at an accelerating rate.  What do you bet?
>>>>
>>>>
>>>> John K Clark    See what's on my new list at  Extropolis
>>>> <https://groups.google.com/g/extropolis>
>>>>
>>>>
