On Sat, 18 Mar 2023, at 08:49, smitra wrote:

> So, in the video we see that it got a question wrong because it thought 
> that 33 is a prime number. I would be more impressed by a system that 
> may make many more mistakes like that than this GPT system made, but 
> where there is a follow-up conversation in which the mistakes are pointed 
> out, the system shows that it has learned, and it then gets correct 
> answers to similar questions that, judging from its previous answers, it 
> would have gotten wrong.

Exactly, very well said.

These models are stateless. Conversations are simulated by re-feeding the 
entire conversation so far, over and over. Not only are we humans not stateless, 
but our brain constantly modifies itself at the same time that it is operating. 
And it does this to maintain an ongoing, persistent and coherent model of 
reality. This model includes our internal model of the people we know: what 
might be going on in their own minds, their long-term history, and their facial 
expressions right now. Memories are formed and are constantly and coherently 
embedded into this internal map.
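
To make the first point concrete, here is a minimal Python sketch of how such a 
"conversation" is simulated (the generate() function is hypothetical, standing 
in for whatever model backend is being called): the model itself keeps no state, 
so the entire transcript has to be re-sent on every turn.

history = []  # the only "memory" lives outside the model

def ask(user_message, generate):
    # generate() is a placeholder for a stateless model call: it takes the
    # whole transcript and returns a reply, remembering nothing afterwards.
    history.append({"role": "user", "content": user_message})
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    return reply

Every call starts from zero; delete the history list and the "conversation" is 
gone, because it never existed inside the model in the first place.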

John Clark will probably dismiss this as some minor technical hurdle along the 
way to AI glory. I am not so sure.

State and self-modification require recurrence. So does Turing completeness. 
Our brain has recurrent connections, but the vanishing gradient problem seems 
to make them hard, if not impossible, to train at scale with gradient descent. 
So we need an algorithm that works with recurrent connections at huge scales. I 
bet that this algorithm will have to be decentralized, which is to say: 
operating at the local level, in the neighborhood of each node in the network 
(a toy sketch of what such a local rule could look like follows the list 
below). The reasons I bet on an emergent, decentralized learning algorithm:

(1) That's how it works in nature;
(2) Incredibly smart people have been trying very hard for more than half a 
century, and the centralized, explicit algorithm that can do what I describe 
above still eludes us -- I am not saying this proves that such an algo does 
not exist, but I am saying that we are probably too dumb to find it.
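
To illustrate what I mean by "local", here is a toy Hebbian-style rule in 
Python/NumPy (purely a sketch of the flavor of such an algorithm, not a claim 
about what the real one will look like). Each weight is updated using only the 
activity of the two neurons it connects; there is no global loss and no 
gradient propagated back through time.

import numpy as np

def hebbian_step(W, pre, post, lr=0.01, decay=0.001):
    # Local update: W[i, j] changes based only on presynaptic activity
    # pre[j] and postsynaptic activity post[i], plus a decay term that
    # keeps the weights bounded. No backpropagation, no centralized
    # error signal.
    return W + lr * np.outer(post, pre) - decay * W

# Toy usage: a recurrent layer whose weights adapt while it is running,
# i.e. the network modifies itself at the same time that it operates.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(50, 50))
x = rng.normal(size=50)
for _ in range(100):
    x = np.tanh(W @ x)         # the network operating...
    W = hebbian_step(W, x, x)  # ...and rewiring itself locally as it goes

Note that nothing is ever propagated through the unrolled recurrence here, so 
the vanishing gradient problem simply does not arise.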

Telmo
