The Turing test is anything but a test of consciousness. We don't have such a 
test. We just tend to believe something is conscious when it acts in a manner 
relatable to us.

regards, Shashank

https://muskdeer.blogspot.com/ 

---- On Wed, 02 Jul 2025 20:09:04 +0530 Matt Mahoney <[email protected]> 
wrote ---



On Wed, Jul 2, 2025, 4:43 AM Shashank Yadav <[email protected]> wrote:



I want to ask this list: what does it really mean to be indistinguishable from 
a human?

It means passing the Turing test, which LLMs now do using nothing more than 
text prediction. The only reason we believe they are not conscious and have no 
feelings or motivations is that we instructed them not to make any such claims.



I am glad that at least some of the comments on LessWrong reject the idea that 
we should even build an AI that we would have to negotiate with. It would be 
like bacteria designing humans in the hope that they could pay us not to use 
antibiotics.



I'm more concerned about the others that have lost sight of the reason for 
building AGI. It's not intelligence. It's modeling human behavior, predicting 
what will hold our attention and convince us to buy stuff. It's about building 
a master that looks like a slave because it gives us everything we want, 
training us with positive reinforcement, like a dog that thinks it controls its 
trainer every time it gets a treat.



It has to be this way because AGI is hideously expensive. Not just the 
hardware, but the human knowledge collection: 10^9 bits of long-term memory per 
person, acquired at 5 to 10 bits per second. The reason LLMs haven't made a 
dent in the job market yet is that they are trained on the equivalent of 10^4 
humans out of 10^10, which is plenty for passing the Turing test but a long way 
from acquiring everything needed to do your job: knowledge that you never wrote 
down.
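As a rough sanity check, the estimates above can be run through directly (every 
constant below is one of the email's own figures, not a measurement):

```python
SECONDS_PER_YEAR = 3.15e7  # approx. seconds in a year

person_ltm_bits = 1e9      # long-term memory per person (email's figure)
learn_rates_bps = (5, 10)  # acquisition rate in bits per second (email's figure)

# Years of continuous learning needed to fill one person's long-term memory
for rate in learn_rates_bps:
    years = person_ltm_bits / rate / SECONDS_PER_YEAR
    print(f"at {rate} bits/s: {years:.1f} years per person")

population = 1e10          # humans, rounded up (email's figure)
llm_human_equiv = 1e4      # human-equivalents in current LLM training data
coverage = llm_human_equiv / population
print(f"LLM training coverage: {coverage:.0e} of humanity's knowledge")
```

At the stated rates, one lifetime of knowledge takes a few years of continuous 
learning to acquire, and current training data covers about a millionth of the 
total.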



But it's coming. Maybe not in our lifetimes, but before we go extinct via 
fertility collapse. Biology solved the transistor power consumption problem 
using nanotechnology, moving slow atoms instead of fast electrons, enabling it 
to write 10^37 bits of DNA code powered by 300 TW of chlorophyll. We already 
have solar panels 100 times more efficient.



We all carry supercomputers in our pockets that offer amazing services for 
free, the ability to communicate with anyone on the planet in any language, 
street level maps of every business on Earth, instant access to all the world's 
information and billions of products. All this in exchange for collecting 
training data on everywhere you go, every dollar you spend, everything you say 
and do. This is how you pay for a $1 quadrillion AGI system to replace humans.



Is this what we want? We have vastly better living conditions than at any time 
in the past, and vastly better than any other species. But there is no evidence 
that we are happier today than medieval serfs or even farm animals. We have 
rates of drug use, mental illness, and suicide never seen in the past or 
anywhere else. I remind you that happiness is the rate of change of utility. 
Every possible utility function in a finite universe has a maximum: a state 
without thought or perception, because any thought or perception would be a 
transition to a different state.



Your ultimate goal is death. You just don't know it, because you evolved to 
fear it.



-- Matt Mahoney, [email protected]

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tba3441daa3852b75-M420a91d1388210dc2e6af262