GPT Full Breakdown

2023-03-14 Thread John Clark
*GPT 4: Full Breakdown (14 Crazy Details You May Have Missed) - Last One is
Extra Wild*


John K Clark    See what's on my new list at Extropolis



NYTimes.com: OpenAI Plans to Up the Ante in Tech’s AI Race with GPT-4

2023-03-14 Thread John Clark
Check out this article from The New York Times. Because I'm a subscriber,
you can read it through this gift link without a subscription.

OpenAI Plans to Up the Ante in Tech’s A.I. Race

The company unveiled new technology called GPT-4 four months after its
ChatGPT stunned Silicon Valley. The update is an improvement, but it
carries some of the same baggage.

https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html?unlocked_article_code=JXhaIRzlWIMwOcP4Jss0itZAxYmmK8mGKO6zjpk-FeqB7ysQefbnbUMPM5SMVddEyOfMk-t69lwm7ZnTj6Medxc5tYSxfX2fm8ndb8zz6gz2nEIkjn4byjMMEH6AcpbYQeSQTS2HXJATz5kzyiw5T6YrviGlMRvcuTGGrx98ahADdqnMgx1y4jT0zvSb6ZiBp-_MPC9z7UrHVlQ0kusQ7FmVMi0fCsup7ORwhILdK73lXFSID71OP4IyQAWyPiS2P6HETbUHKTk_FHXa57yq42cJBoHQPSAr6OIVHhFo3xXthHbPtVklinK0J02FTN_4EaajDW1CEL_Z8M5W-QaoFA=em-share



Does rationalism lend itself to nation building?

2023-03-14 Thread Joel Ðietz
The nexus of rationalist thinkers has produced some of the most
incisive writing on proto-nations to date (
https://astralcodexten.substack.com/p/prospectus-on-prospera), but does
this school of thought have sufficient method, or heft, to provide the fully
integrated mental apparatus needed at the foundational stage of nation design?

For this, consider Balaji's much-noted "Network State," which emerged
recently around the concept of "design a state in VR and then push a button
to deploy it," and which, at least presumably, is backed by a sufficient
amount of capital and a cryptocurrency-inspired economic model.

For what it's worth, on the product side we built at least a prototype of
the "city design in VR" part, but the gap between concept and working system
is often large. Closing it requires, among other things, an assessment of the
would-be settlers, the ostensible rule of law (including enforcement
mechanisms), and whatever economic model is at play.

Additionally, given the perceived lack of available space (at least on a map,
everything appears to be occupied by existing nations), there is the
game-theoretic question of how existing nations will review and respond to
micro-nations eager to issue their own passports.

I ask this question in part because I am thinking of creating a rating
system for these 'startup societies', one that includes their technological
sophistication and other factors that might correlate with long-term
success. I am, however, at an early enough stage that I have not decided
which factors to include.

Thus the floor is open for anyone with opinions, ideally with a rationalist
flavor.



Re: The connectome and uploading

2023-03-14 Thread Telmo Menezes


On Tue, 14 Mar 2023, at 13:48, John Clark wrote:
> On Tue, Mar 14, 2023 at 7:31 AM Telmo Menezes  wrote:
> 
>>> > One of the authors of the article says "It’s interesting that the 
>>> > computer-science field is converging onto what evolution has discovered", 
>>> > he said that because it turns out that 41% of the fly brain's neurons are 
>>> > in recurrent loops that provide feedback to other neurons that are 
>>> > upstream of the data processing path, and that's just what we see in 
>>> > modern AIs like ChatGPT.
>> 
>> *> I do not think this is true. ChatGPT is a fine-tuned Large Language Model 
>> (LLM), and LLMs use a transformer architecture, which is deep but purely 
>> feed-forward, and uses attention heads. The attention mechanism was the big 
>> breakthrough back in 2017, that finally enabled the training of such big 
>> models:*
> 
> I was under the impression that transformers are superior to recurrent neural 
> networks because recurrent processing of data was not necessary with 
> transformers, so more parallelization is possible than with recurrent neural 
> networks; a transformer can analyze an entire sentence at once and doesn't 
> need to do so word by word. So transformers learn faster and need less 
> training data.

It is true that transformers are faster for the reason you say, but the 
vanishing gradient problem was definitely an issue. Right before transformers, 
the dominant architecture was LSTM, which was recurrent but designed in such a 
way as to deal with the vanishing gradient:

https://en.wikipedia.org/wiki/Long_short-term_memory
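
For reference, a minimal numpy sketch of one standard LSTM step (the
parameter names are mine). The point is the additive cell-state update:
gradients can flow through c across many time steps instead of being
squashed at every one.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # W, U, b are dicts holding the parameters of the four gates
    f = sigmoid(W["f"] @ x + U["f"] @ h + b["f"])  # forget gate
    i = sigmoid(W["i"] @ x + U["i"] @ h + b["i"])  # input gate
    o = sigmoid(W["o"] @ x + U["o"] @ h + b["o"])  # output gate
    g = np.tanh(W["g"] @ x + U["g"] @ h + b["g"])  # candidate cell state
    c = f * c + i * g          # additive update: no repeated squashing
    h = o * np.tanh(c)
    return h, c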

Memory is the obvious way to deal with context, but as you say, transformers 
consider the entire sentence (or more) all at once. Attention heads allow 
learning to focus, in parallel, on several aspects of the sentence at the same 
time, and then to combine them at higher and higher layers of abstraction.
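
For concreteness, a minimal numpy sketch of the scaled dot-product attention
at the heart of the transformer; a multi-head layer just runs several of
these side by side and concatenates the results:

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    # Q, K, V: (seq_len, d) arrays. Every position attends to every other
    # position in one shot -- no recurrence over time steps.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # pairwise relevance
    return softmax(scores) @ V               # weighted mix of value vectors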

I do not think that any of this has any impact on the size of the training 
corpus required.

> 
>> *> My intuition is that if we are going to successfully imitate biology we 
>> must model the various neurotransmitters.*
> 
> That is not my intuition. I see nothing sacred in hormones,

I agree that there is nothing sacred about hormones; the only important thing 
is that there are several of them, with different binding properties. Current 
artificial neural networks (ANNs) have only one type of signal between neurons, 
the activation signal. Our brains can signal different things -- importantly, 
they use dopamine to regulate learning -- and that can serve as a building 
block for a decentralized, emergent learning algorithm, one that clearly deals 
with recurrent connections with no problem.

With recurrent connections an NN becomes Turing complete. I would be extremely 
surprised if Turing completeness turned out not to be a requirement for AGI.

> I don't see the slightest reason why they or any neurotransmitter would be 
> especially difficult to simulate through computation, because chemical 
> messengers are not a sign of sophisticated design on nature's part, rather 
> it's an example of Evolution's bungling. If you need to inhibit a nearby 
> neuron there are better ways of sending that signal then launching a GABA 
> molecule like a message in a bottle thrown into the sea and waiting ages for 
> it to diffuse to its random target.

Of course they are easy to simulate. Another question is whether they are easy 
to simulate at the speed at which we can perform gradient descent on 
contemporary GPU architectures. Granted, this is just a technical problem, not 
a fundamental one. What is more fundamental (and apparently hard) is to know 
*what* to simulate, so that a powerful learning algorithm emerges from such 
local interactions.

Neuroscience provides us with a wealth of information about the biological 
reality of our brains, but deciding what to abstract from it to create the 
master learning algorithm that we crave is perhaps the crux of the matter. 
Maybe it will take an Einstein-level intellect to achieve this breakthrough.

> I'm not interested in brain chemicals, only in the information they contain, 
> if somebody wants  information to get transmitted from one place to another 
> as fast and reliablely as possible, nobody would send smoke signals if they 
> had a fiber optic cable. The information content in each molecular message 
> must be tiny, just a few bits because only about 60 neurotransmitters such as 
> acetylcholine, norepinephrine and GABA are known, even if the true number is 
> 100 times greater (or a million times for that matter) the information 
> content of each signal must be tiny. Also, for the long range stuff, exactly 
> which neuron receives the signal can not be specified because it relies on a 
> random process, diffusion. The fact that it's slow as molasses in February 
> does not add to its charm.  

I completely agree; I am not fetishizing the wetware. Silicon is much faster.

Telmo

Re: The connectome and uploading

2023-03-14 Thread Terren Suydam
On Tue, Mar 14, 2023 at 8:49 AM John Clark  wrote:

> On Tue, Mar 14, 2023 at 7:31 AM Telmo Menezes 
> wrote:
>
> *> My intuition is that if we are going to successfully imitate biology we
>> must model the various neurotransmitters.*
>
>
> That is not my intuition. I see nothing sacred in hormones; I don't see
> the slightest reason why they or any neurotransmitter would be especially
> difficult to simulate through computation, because chemical messengers are
> not a sign of sophisticated design on nature's part but rather an example
> of Evolution's bungling. If you need to inhibit a nearby neuron, there are
> better ways of sending that signal than launching a GABA molecule like a
> message in a bottle thrown into the sea and waiting ages for it to diffuse
> to its random target.
>

I don't think the point is about the specific neurotransmitters (NTs) used
in biological brains, but that there are multiple NTs, each of which activates
separable circuits in the brain. It's probably adaptive to have multiple
NTs, to further modularize the brain's functionality. This may be an
important part of generalized intelligence.


> I'm not interested in brain chemicals, only in the information they
> contain; if somebody wants information transmitted from one place to
> another as fast and reliably as possible, nobody would send smoke signals
> if they had a fiber optic cable. The information content of each molecular
> message must be tiny, just a few bits, because only about 60
> neurotransmitters such as acetylcholine, norepinephrine and GABA are known;
> even if the true number were 100 times greater (or a million times, for
> that matter), the information content of each signal would still be tiny.
> Also, for the long-range stuff, exactly which neuron receives the signal
> cannot be specified, because it relies on a random process: diffusion. The
> fact that it's slow as molasses in February does not add to its charm.
>

Similarly, NTs that produce effects on different timescales, or on more
diffuse targets, may provide functionality that a single, fast NT cannot
achieve. You might call it Evolutionary bungling, but it's not necessarily
the case that faster is always better. I sometimes wonder how an AI that
processed information a million times faster than a human could manage to
talk to humans at all: a ten-minute human reply, at a millionfold speed-up,
lasts roughly nineteen subjective years. Imagine having to wait twenty years
for every response - that's how it might feel to a super-fast AI.

Terren



Re: The connectome and uploading

2023-03-14 Thread Samiya Illias
Acknowledging the Perfection of our Lord

No change should there be in the creation of Allah [Quran 30:30] 
Mission of the Messengers - XXIX  




Abstract 
To do tasbīḥ (تَسْبِيح) of Allah means to acknowledge, declare, and/or celebrate 
that Allah is absolutely perfect. Allah creates perfectly and governs 
excellently. We humans need to acknowledge and appreciate this fact, and 
consequently submit to The Right Religion (الدِّينُ الْقَيِّمُ). 


Full Text
https://signsandscience.blogspot.com/2018/10/acknowledging-perfection-of-our-lord.html
  


> On 14-Mar-2023, at 6:48 PM, John Clark  wrote:
> 
> 
>> On Tue, Mar 14, 2023 at 9:44 AM Samiya Illias  wrote:
>> 
>> > Aren’t you an emergent property of the same system that you are 
>> > criticising? 
> 
> Yes.

Re: The connectome and uploading

2023-03-14 Thread John Clark
On Tue, Mar 14, 2023 at 9:44 AM Samiya Illias 
wrote:

*> Aren’t you an emergent property of the same system that you are
> criticising? *
>

Yes.

John K Clark    See what's on my new list at Extropolis


Re: The connectome and uploading

2023-03-14 Thread Samiya Illias
If you are so inefficiently wired, how come you can comment on the inefficiency 
of the system? Aren’t you an emergent property of the same system that you are 
criticising? 





Re: The connectome and uploading

2023-03-14 Thread John Clark
On Tue, Mar 14, 2023 at 7:31 AM Telmo Menezes 
wrote:

> One of the authors of the article says "It’s interesting that the
>> computer-science field is converging onto what evolution has discovered",
>> he said that because it turns out that 41% of the fly brain's neurons are
>> in recurrent loops that provide feedback to other neurons that are upstream
>> of the data processing path, and that's just what we see in modern AIs like
>> ChatGPT.
>
>
>
> *> I do not think this is true. ChatGPT is a fine-tuned Large Language
> Model (LLM), and LLMs use a transformer architecture, which is deep but
> purely feed-forward, and uses attention heads. The attention mechanism was
> the big breakthrough back in 2017, that finally enabled the training of
> such big models:*
>

I was under the impression that transformers are superior to recurrent
neural networks because recurrent processing of data was not necessary with
transformers, so more parallelization is possible than with recurrent neural
networks; a transformer can analyze an entire sentence at once and doesn't
need to do so word by word. So transformers learn faster and need less
training data.

*> My intuition is that if we are going to successfully imitate biology we
> must model the various neurotransmitters.*


That is not my intuition. I see nothing sacred in hormones; I don't see the
slightest reason why they or any neurotransmitter would be especially
difficult to simulate through computation, because chemical messengers are
not a sign of sophisticated design on nature's part but rather an example
of Evolution's bungling. If you need to inhibit a nearby neuron, there are
better ways of sending that signal than launching a GABA molecule like a
message in a bottle thrown into the sea and waiting ages for it to diffuse
to its random target.

I'm not interested in brain chemicals, only in the information they contain;
if somebody wants information transmitted from one place to another as fast
and reliably as possible, nobody would send smoke signals if they had a
fiber optic cable. The information content of each molecular message must be
tiny, just a few bits, because only about 60 neurotransmitters such as
acetylcholine, norepinephrine and GABA are known; even if the true number
were 100 times greater (or a million times, for that matter), the
information content of each signal would still be tiny. Also, for the
long-range stuff, exactly which neuron receives the signal cannot be
specified, because it relies on a random process: diffusion. The fact that
it's slow as molasses in February does not add to its charm.
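
A back-of-the-envelope check on "just a few bits" (my arithmetic, not from
the post):

import math
# Picking one of ~60 known transmitter types carries log2(60) bits;
# even 100x as many types barely moves the number.
print(math.log2(60))        # ~5.9 bits
print(math.log2(60 * 100))  # ~12.6 bits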

If your job is delivering packages and all the packages are very small, and
your boss doesn't care who you give them to as long as they're on the
correct continent, and you have until the next ice age to get the work
done, then you don't have a very difficult profession.  Artificial neurons
could be made to communicate as inefficiently as natural ones do by
releasing chemical neurotransmitters if anybody really wanted to, but it
would be pointless when there are much faster, and much more reliable, and
much more specific ways of operating.

John K Clark    See what's on my new list at Extropolis



Re: The connectome and uploading

2023-03-14 Thread Telmo Menezes
This is very nice. I did some work with the previously available complete 
connectome (that of C. elegans).

But:

On Tue, 14 Mar 2023, at 12:05, John Clark wrote:
> One of the authors of the article says "*It’s interesting that the 
> computer-science field is converging onto what evolution has discovered*", he 
> said that because it turns out that 41% of the fly brain's neurons are in 
> recurrent loops that provide feedback to other neurons that are upstream of 
> the data processing path, and that's just what we see in modern AIs like 
> ChatGPT. 

I do not think this is true. ChatGPT is a fine-tuned Large Language Model 
(LLM), and LLMs use a transformer architecture, which is deep but purely 
feed-forward, and uses attention heads. The attention mechanism was the big 
breakthrough back in 2017, that finally enabled the training of such big models:

https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html

Recurrent networks have been tried for decades, precisely because of their 
biological plausibility, but they suffer from the "vanishing gradient" 
problem. In simple terms, recurrence means that an input from a long time ago 
can remain important, but it becomes increasingly hard for gradient descent 
algorithms to assign the correct importance to the weights. So in this case, 
the breakthrough was achieved by moving away from biological plausibility.
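
A toy numeric illustration of the vanishing gradient (my construction):
backpropagation through T recurrent steps multiplies T Jacobians, and if
their norms sit below 1 the product decays geometrically.

import numpy as np

rng = np.random.default_rng(0)
T, n = 100, 32
# Random orthogonal matrices scaled to spectral norm 0.9 stand in for the
# per-step Jacobians of a recurrent network.
J = [0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0] for _ in range(T)]
prod = np.eye(n)
for t, Jt in enumerate(J, 1):
    prod = Jt @ prod
    if t % 25 == 0:
        print(t, np.linalg.norm(prod, 2))  # shrinks like 0.9**t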

I think that part of the reason for this is that although neural network 
topology is biologically inspired, the dominant learning algorithms are 
centralized and top-down (gradient descent). Learning algorithms in our own 
brain are certainly much more decentralized / emergent / distributed. I do 
not think we have cracked them yet. I imagine recurrent NNs will be back once 
we do. My intuition is that if we are going to successfully imitate biology 
we must model the various neurotransmitters. There is a reason why we have 
several of them (and all sorts of drugs that imitate them and can bind 
selectively). This contrasts with the "single signal type" approach of 
contemporary artificial NNs -- which is very handy because it fits linear 
algebra, and thus GPU architectures, so well.
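
To make the linear-algebra point concrete, a sketch; the second, modulatory
channel is purely hypothetical, just to show where the clean one-matmul
structure breaks once there is more than one signal type:

import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 512))  # every synapse of a layer at once
x = rng.standard_normal(512)         # upstream activations
y = np.maximum(0.0, W @ x)           # one matmul + ReLU: GPU-friendly

# Hypothetical dopamine-like channel: each extra signal type needs its own
# weight matrix and its own interaction rule with the activation channel.
W_mod = rng.standard_normal((256, 512))
gain = 1.0 / (1.0 + np.exp(-(W_mod @ x)))  # slow modulatory signal
y2 = gain * y                              # channels combine multiplicatively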

Telmo



The connectome and uploading

2023-03-14 Thread John Clark
In the March 10, 2023 issue of the journal Science there is a report that for
the first time the entire connectome of an insect brain (the fruit fly
Drosophila melanogaster) has been mapped: all 3,016 neurons and 548,000
synapses of it. They also found that there are 93 different types of neurons.
That connectome is an order of magnitude larger than any that had been mapped
before. One of the authors of the article says "*It's interesting that the
computer-science field is converging onto what evolution has discovered*"; he
said that because it turns out that 41% of the fly brain's neurons are in
recurrent loops that provide feedback to other neurons upstream of the
data-processing path, and that's just what we see in modern AIs like ChatGPT.
He also said that the method they used was slow and very labor-intensive, but
added, "*Now that we have a reference brain one can now use it to train
machine learning to do it much faster*". They also found that "*Although the
details of brain organization differ across the animal kingdom, many circuit
architectures are conserved*". The implications this has for human uploading
are obvious.

The connectome of an insect brain
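
For anyone who wants to poke at data like this, a sketch of how a "fraction
of neurons in recurrent loops" figure can be computed once a connectome is
loaded as a directed graph (the edge list below is a toy stand-in, not the
published data):

import networkx as nx

# Neurons are nodes, synapses are directed edges. A neuron sits in a
# recurrent loop iff it lies on a directed cycle, i.e. belongs to a
# strongly connected component of size > 1 (self-loops ignored here).
G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5)])
in_loops = {n for scc in nx.strongly_connected_components(G)
            if len(scc) > 1 for n in scc}
print(len(in_loops) / G.number_of_nodes())  # 0.6 for this toy graph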


John K Clark    See what's on my new list at Extropolis



ChatGPT's rebuttal to Chomsky

2023-03-14 Thread John Clark
Brent Meeker wrote:

*> So there were 4^3,000,000,000 possible genomes of that length.*


Yes, and that number is vastly greater than the number of atoms in the
observable universe. Nobody thinks there is only one way to make
intelligence, so there must be an astronomical number of ways to arrange
those 3 billion base pairs that would result in a human superior to every
other human being who has ever existed in intelligence or kindness or
strength or health or beauty or any other criterion you care to name. And
there must be an astronomical number of ways to arrange those 3 billion base
pairs that would produce a being who is not human but is superior to even
the Greatest Theoretical Possible Human.
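
To put a number on "vastly greater" (my arithmetic): 4^3,000,000,000 has
about 3x10^9 * log10(4), i.e. roughly 1.8 billion, decimal digits, while the
usual estimate for atoms in the observable universe, 10^80, has just 81.

import math
print(3_000_000_000 * math.log10(4))  # ~1.8e9 digits in 4**3_000_000_000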

*> Evolution found an intelligent one pretty quick.*


I would not say 3 1/2 billion years to produce intelligence was "pretty
quick". I admit the wiring diagram of a human brain is far more complicated
than that of a modern microprocessor, but I think all those wheels within
wheels and pasted-on bells and whistles are a sign of weakness, not of
strength. The difference is just what you would expect between something
that came about through random mutation and natural selection and something
that came out of the mind of an intelligent human engineer.

And there are other examples of Evolution's poor design abilities: in the
eye of any vertebrate animal, the blood vessels that feed the light-sensitive
cells and the nerves that communicate with them are not behind the retina,
as would be logical, but in front of it, so light must pass through them
before it hits the light-sensitive cells. This makes vision less sharp than
it would otherwise be and creates a blind spot in each eye's visual field.
No amount of spin can turn that dopey mess into a good design; a human
engineer would have to be dead drunk to come up with a hodgepodge like that.

John K Clark    See what's on my new list at Extropolis
