The question offered up 6 weeks ago was: how does anything resembling an animal 
brain arise from a server farm? 
At this point, I claim it doesn't, and that GPT-3 and GPT-4 are clever Language Machines.
The claim that, via magic, a consciousness arises in silicon or gallium 
arsenide seems a tall order. I have seen no article by any computer scientist, 
neurobiologist, or physicist indicating HOW computer consciousness could arise. If 
there is something out there, somebody please post a link to this august 
mailing list. 
Now, for life arising out of chemicals on planet earth, I stumbled upon this 
yesterday. The candidate molecule is called Nickelback (O Canada!):

Scientists Have Found Molecule That Is Behind The Origin Of Life On Earth? Read 
To Know
https://www.republicworld.com/science/space/scientists-have-found-molecule-that-is-behind-the-origin-of-life-on-earth-read-to-know-articleshow.html

Somebody come up with a theory of how network systems can accidentally produce a 
human-level mind before we celebrate GPT-4 overmuch. 
Let humans come up with a network that produces inventions humans alone would 
not have arrived at for decades or centuries! That would be the big 
breakthrough, not a fun chatbot.




-----Original Message-----
From: Telmo Menezes <te...@telmomenezes.net>
To: Everything List <everything-list@googlegroups.com>
Sent: Tue, Mar 14, 2023 11:45 am
Subject: Re: The connectome and uploading


On Tue, 14 Mar 2023, at 13:48, John Clark wrote:

On Tue, Mar 14, 2023 at 7:31 AM Telmo Menezes <te...@telmomenezes.net> wrote:



> One of the authors of the article says "It’s interesting that the 
> computer-science field is converging onto what evolution has discovered"; 
> he said that because it turns out that 41% of the fly brain's neurons are 
> in recurrent loops that provide feedback to other neurons upstream of the 
> data processing path, and that's just what we see in modern AIs like 
> ChatGPT.


> I do not think this is true. ChatGPT is a fine-tuned Large Language Model 
> (LLM), and LLMs use a transformer architecture, which is deep but purely 
> feed-forward, and uses attention heads. The attention mechanism was the big 
> breakthrough back in 2017, that finally enabled the training of such big 
> models:


I was under the impression that transformers are superior to recurrent neural 
networks because recurrent processing of data is not necessary with 
transformers, so more parallelization is possible than with recurrent neural 
networks; a transformer can analyze an entire sentence at once and doesn't need 
to do so word by word. So transformers learn faster and need less training data.
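
As a rough illustration of the contrast, here is a toy numpy sketch; the 
dimensions, weights, and variable names are invented for the example, not 
taken from any real model:

    # Toy contrast: sequential recurrence vs. all-at-once attention.
    import numpy as np

    T, d = 5, 8                       # 5 tokens, 8-dim embeddings (toy sizes)
    x = np.random.randn(T, d)         # the whole sentence, embedded

    # Recurrent style: each step depends on the previous hidden state,
    # so the T steps cannot run in parallel.
    W, U = np.random.randn(d, d), np.random.randn(d, d)
    h = np.zeros(d)
    for t in range(T):                # forced to walk the sentence word by word
        h = np.tanh(x[t] @ W + h @ U)

    # Attention style: every token attends to every other token in one
    # shot, via matrix multiplications that parallelize trivially on a GPU.
    Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d)     # all T x T token pairs at once
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    out = weights @ V                 # each row is a context-aware token vector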


It is true that transformers are faster for the reason you say, but the 
vanishing gradient problem was definitely an issue with recurrent networks. 
Right before transformers, the dominant architecture was the LSTM, which was 
recurrent but designed in such a way as to deal with the vanishing gradient:

https://en.wikipedia.org/wiki/Long_short-term_memory
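
For concreteness, here is a toy LSTM cell step in numpy (sizes and 
initializations are illustrative only). The additive update to the cell state 
c is the design trick: gradients flow through the sum f * c + i * g without 
being squashed multiplicatively at every step, as they are in a plain tanh 
recurrence:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    d = 8                                    # toy hidden size
    rng = np.random.default_rng(0)
    Wf, Wi, Wo, Wg = (rng.standard_normal((2 * d, d)) for _ in range(4))

    def lstm_step(x_t, h, c):
        z = np.concatenate([x_t, h])         # current input + previous state
        f = sigmoid(z @ Wf)                  # forget gate: what to keep in c
        i = sigmoid(z @ Wi)                  # input gate: what to write to c
        o = sigmoid(z @ Wo)                  # output gate: what to expose as h
        g = np.tanh(z @ Wg)                  # candidate values
        c = f * c + i * g                    # additive cell update: the key trick
        h = o * np.tanh(c)
        return h, c

    h = c = np.zeros(d)
    for x_t in rng.standard_normal((5, d)):  # still sequential, unlike attention
        h, c = lstm_step(x_t, h, c)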

Memory is the obvious way to deal with context, but as you say, transformers 
consider the entire sentence (or more) all at once. Attention heads allow 
learning to proceed in parallel, focusing on several aspects of the sentence at 
the same time and then combining them at higher and higher layers of abstraction.
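
A toy numpy sketch of that splitting and recombining (all sizes and weights 
invented for the example): each head attends over the sentence independently, 
and the concatenated head outputs are mixed by an output projection, feeding 
the next layer of abstraction:

    import numpy as np

    T, d, n_heads = 5, 8, 2                    # toy sizes
    dh = d // n_heads                          # per-head dimension
    rng = np.random.default_rng(1)
    x = rng.standard_normal((T, d))

    def softmax(s):
        e = np.exp(s - s.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    heads = []
    for _ in range(n_heads):                   # each head: one "aspect"
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) for _ in range(3))
        Q, K, V = x @ Wq, x @ Wk, x @ Wv
        heads.append(softmax(Q @ K.T / np.sqrt(dh)) @ V)

    Wo = rng.standard_normal((d, d))
    out = np.concatenate(heads, axis=-1) @ Wo  # combine the heads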

I do not think that any of this has any impact on the size of the training 
corpus required.




> My intuition is that if we are going to successfully imitate biology we must 
> model the various neurotransmitters.


That is not my intuition. I see nothing sacred in hormones,


I agree that there is nothing sacred about hormones; the only important thing 
is that there are several of them, with different binding properties. Current 
artificial neural networks (ANNs) have only one type of signal between neurons: 
the activation signal. Our brains can signal different things, importantly 
using dopamine to regulate learning -- which can thus serve as a building block 
for a decentralized, emergent learning algorithm, one that clearly deals with 
recurrent connections with no problem.
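
As a hedged sketch of the kind of mechanism meant here, consider the 
"three-factor" reward-modulated Hebbian rule from computational neuroscience: a 
purely local pre-times-post update, gated by a global dopamine-like scalar. 
This is a generic textbook idea, not a specific proposal from this thread, and 
all names and numbers are toy values:

    import numpy as np

    rng = np.random.default_rng(2)
    W = rng.standard_normal((4, 4)) * 0.1    # recurrent weights, 4 toy neurons

    def step(activity, W, dopamine, lr=0.01):
        new_activity = np.tanh(W @ activity)
        # Local Hebbian term (post * pre), gated by the global signal:
        # the chemical carries almost no information itself, but it
        # decides *when* the local update is applied.
        W = W + lr * dopamine * np.outer(new_activity, activity)
        return new_activity, W

    activity = rng.standard_normal(4)
    for t in range(10):
        reward = 1.0 if t % 3 == 0 else 0.0  # stand-in for a dopamine burst
        activity, W = step(activity, W, reward)

Nothing here requires backpropagation through time; the update is local, which 
is why recurrence poses no problem for it.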

With recurrent connections an NN becomes Turing complete. I would be extremely 
surprised if Turing completeness turned out not to be a requirement for AGI.
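
The classic result here is Siegelmann and Sontag's: a recurrent network with 
rational weights can simulate a Turing machine. As a far more modest 
illustration of what feedback buys, stripped of the neural dressing, carried 
state lets a fixed-size system count, which no fixed-depth feed-forward circuit 
can do over inputs of unbounded length. A toy example (plain Python, purely 
illustrative):

    # The recurrent "state" variable plays the role of a feedback loop:
    # its value at each step depends on its value at the previous step.
    def balanced(s):
        state = 0                     # unmatched '(' count so far
        for ch in s:
            state += 1 if ch == "(" else -1
            if state < 0:             # a ')' with nothing to match
                return False
        return state == 0

    assert balanced("(()(()))")
    assert not balanced("())(")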


I don't see the slightest reason why they or any neurotransmitter would be 
especially difficult to simulate through computation, because chemical 
messengers are not a sign of sophisticated design on nature's part; rather, 
they are an example of Evolution's bungling. If you need to inhibit a nearby 
neuron, there are better ways of sending that signal than launching a GABA 
molecule like a message in a bottle thrown into the sea and waiting ages for it 
to diffuse to its random target.


Of course they are easy to simulate. Another question is whether they are easy 
to simulate at the speed at which we can perform gradient descent on 
contemporary GPU architectures. That is just a technical problem, though, not a 
fundamental one. What is more fundamental (and apparently hard) is knowing 
*what* to simulate, so that a powerful learning algorithm emerges from such 
local interactions.

Neuroscience provides us with a wealth of information about the biological 
reality of our brains, but what to abstract from this to create the master 
learning algorithm that we crave is perhaps the crux of the matter. Maybe it 
will take an Einstein level of intellect to achieve this breakthrough.


I'm not interested in brain chemicals, only in the information they contain. If 
somebody wants information transmitted from one place to another as fast and 
reliably as possible, nobody would send smoke signals if they had a fiber optic 
cable. The information content of each molecular message must be tiny, just a 
few bits, because only about 60 neurotransmitters, such as acetylcholine, 
norepinephrine, and GABA, are known; even if the true number were 100 times 
greater (or a million times, for that matter), the information content of each 
signal would still be tiny. Also, for the long-range stuff, exactly which 
neuron receives the signal cannot be specified, because it relies on a random 
process: diffusion. The fact that it's slow as molasses in February does not 
add to its charm.  
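
The arithmetic behind "just a few bits": picking one message out of N equally 
likely neurotransmitters conveys log2(N) bits, so even wildly generous counts 
stay tiny:

    import math

    for n in (60, 60 * 100, 60 * 1_000_000):
        print(f"{n:>10} messenger types -> {math.log2(n):.1f} bits per signal")
    #         60 messenger types -> 5.9 bits per signal
    #       6000 messenger types -> 12.6 bits per signal
    #   60000000 messenger types -> 25.8 bits per signal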


I completely agree, I am not fetishizing the wetware. Silicon is much faster.

Telmo


If your job is delivering packages and all the packages are very small, and 
your boss doesn't care who you give them to as long as they're on the correct 
continent, and you have until the next ice age to get the work done, then you 
don't have a very difficult profession.  Artificial neurons could be made to 
communicate as inefficiently as natural ones do by releasing chemical 
neurotransmitters if anybody really wanted to, but it would be pointless when 
there are much faster, and much more reliable, and much more specific ways of 
operating.

John K Clark    See what's on my new list at  Extropolis


