This is very nice. I did some work with the previously available complete 
connectome (C. elegans).

But:

Am Di, 14. Mär 2023, um 12:05, schrieb John Clark:
> One of the authors of the article says "*It’s interesting that the 
> computer-science field is converging onto what evolution has discovered*", he 
> said that because it turns out that 41% of the fly brain's neurons are in 
> recurrent loops that provide feedback to other neurons that are upstream of 
> the data processing path, and that's just what we see in modern AIs like 
> ChatGPT. 

I do not think this is true. ChatGPT is a fine-tuned Large Language Model 
(LLM), and LLMs use a transformer architecture, which is deep but purely 
feed-forward and built around attention heads. The attention mechanism was the 
big breakthrough back in 2017 that finally enabled the training of such big 
models:

https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
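To make the contrast concrete, here is a minimal NumPy sketch of the scaled 
dot-product attention operation from that paper. Note that nothing in it feeds 
back on itself: it is one pass of matrix products over the whole sequence. The 
shapes and random inputs are of course just illustrative toy values.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V -- a purely feed-forward computation
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted sum of the values

# toy example: a sequence of 3 tokens, each a 4-dimensional vector
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per token
```

Every token attends to every other token in a single step, instead of 
information being threaded through a recurrent loop one step at a time.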

Recurrent networks have been tried for decades precisely because of their 
biological plausibility, but they suffer from the "vanishing gradient" problem. 
In simple terms, recurrence means that an input from a long time ago can remain 
important, but it becomes increasingly hard for gradient descent algorithms to 
assign the correct importance to the weights. So in this case, the breakthrough 
was achieved by moving away from biological plausibility.

I think that part of the reason for this is that although neural network 
topology is biologically inspired, the dominant learning algorithms are 
centralized top-down (gradient descent). Learning algorithms in our own brain 
are certainly much more decentralized / emergent / distributed. I do not think 
we have cracked them yet. I imagine recurrent NNs will be back once we do. My 
intuition is that if we are going to successfully imitate biology we must model 
the various neurotransmitters. There is a reason why we have several of them 
(and all sorts of drugs that imitate them and can bind selectively). This 
contrasts with the "single signal type" approach of contemporary artificial NNs 
-- which is very handy because it really fits linear algebra and thus GPU 
architectures.
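The "fits linear algebra" point can be made concrete: because each artificial 
neuron emits a single scalar signal, an entire layer collapses into one 
matrix-vector product, which is exactly the operation GPUs are built for. A 
toy sketch of one dense layer (the sizes and random weights are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((5, 3))   # 5 neurons, each reading 3 input signals
b = np.zeros(5)                   # one bias per neuron
x = rng.standard_normal(3)        # a single scalar "signal" per input

# The whole layer is one matrix-vector product plus a nonlinearity (ReLU here).
y = np.maximum(0.0, W @ x + b)
print(y.shape)  # (5,): one output signal per neuron
```

Modeling several neurotransmitters would presumably mean several signal types 
per connection, and the computation would no longer reduce to a single clean 
matrix product like this.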

Telmo
