On Monday, June 24, 2024, at 1:16 PM, Matt Mahoney wrote:
> By this test, reinforcement learning algorithms are conscious. Consider a 
> simple program that outputs a sequence of alternating bits 010101... until it 
> receives a signal at time t. After that it outputs all zero bits. In code:
> 
> for (int i=0;;i++) cout<<(i<t & i%2);
> 
> If t is odd, then it is a positive reinforcement signal that rewards the last 
> output bit, 0. If t is even, then it is a negative signal that penalizes the 
> last output bit, 1. In either case the magnitude of the signal is about 1 
> bit. Since humans have 10^9 bits of long term memory, this program is about 
> one billionth as conscious as a human.

Reminds me of this:

"A lone molecule of tryptophan displays a fairly standard quantum property: it 
can absorb a particle of light (called a photon) at a certain frequency and 
emit another photon at a different frequency. This process is called 
fluorescence and is very often used in studies to investigate protein 
responses."

Is your program conscious simply as a string, without ever being run? If so, describe how you would calculate its consciousness.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T6510028eea311a76-M74977b3fe00cfa753914fa46
Delivery options: https://agi.topicbox.com/groups/agi/subscription
