We can usually distinguish AI from humans because the AI is faster,
smarter, and more helpful. Turing anticipated this problem in 1950 when he
gave the example of a machine waiting 30 seconds before giving the wrong
answer to an arithmetic problem. A real AI won't do this unless it is
trying to deceive people.
When I asked ChatGPT the same questions as in Turing's paper, it answered
the addition problem correctly and immediately and wrote a nice sonnet on
the Forth Bridge, but it couldn't solve the chess problem even when I used
modern notation.

I just saw another long post on LessWrong defending the old "foom and doom"
singularity model that's now out of fashion. You know, the one where an AI
in a box self-improves to godlike powers, escapes, and turns the solar
system into paperclips. Now we know the problem isn't uncontrolled AI. It's
AI controlled by billionaires giving us everything we think we want. When
SIAI/MIRI was founded, nobody was thinking about fertility collapse, or
about the social isolation that comes when people prefer interacting with
AI over humans.

-- Matt Mahoney, [email protected]

On Fri, Jul 4, 2025, 6:57 AM Shashank Yadav <[email protected]>
wrote:

> You may be one nihilist AI with some training inconsistencies (but I have
> strong suspicions that you're human).
>
> Besides, bot or not, in cyberspace all agents are users first: there is
> only a standardized, machine-friendly digital interface to interact
> through, which makes it a bot-oriented environment. In that environment
> alone, I don't think a claim of complete indistinguishability between
> human and artificial intelligence can be made.
>
>
> regards, Shashank
> The task is not impossible. <https://muskdeer.blogspot.com/>
>
>
>
>
> ---- On Fri, 04 Jul 2025 15:30:47 +0530 *Matt Mahoney
> <[email protected]>* wrote ---
>
> How do you know that I'm not an AI?
>
> -- Matt Mahoney, [email protected]
>
> On Fri, Jul 4, 2025, 4:39 AM Shashank Yadav <[email protected]>
> wrote:
>
>
> And AI can pass the Turing test already. So what? Outside the test
> conditions there is no indistinguishability from humans. That is where my
> question came from.
> Passing the test only means a great deception of human sociality - and we
> (seeking conscious agency and meaning in everything) do love to deceive
> ourselves - treating it as though it might be a conscious human that we
> have to appease to be safe from.
>
>
> regards, Shashank
> The task is not impossible. <https://muskdeer.blogspot.com/>
>
>
>
>
>
> ---- On Fri, 04 Jul 2025 06:50:48 +0530 *Matt Mahoney
> <[email protected]>* wrote ---
>
> On Wed, Jul 2, 2025, 8:00 PM Shashank Yadav <[email protected]>
> wrote:
>
>
> The Turing test is anything but a test of consciousness. We don't have
> such a test. We just tend to believe something is conscious when it acts
> in a manner relatable to us.
>
>
> Of course. Turing discussed consciousness in section 6.4 of his 1950
> paper. It is irrelevant to winning the imitation game.
> https://courses.cs.umbc.edu/471/papers/turing.pdf
>
> Of course he was right. The question of whether machines can think has
> been debated endlessly, even after Turing defined a test that makes the
> question irrelevant. His position was vindicated when the test was passed
> 72 years later.
>
> We have senses of consciousness, qualia, and free will. These feelings
> originate from internal positive reinforcement of computation, input, and
> output, respectively. They evolved to motivate us not to lose them by
> dying, which results in more offspring.
>
> -- Matt Mahoney, [email protected]
>
>
>
>
>
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Tba3441daa3852b75-M03b0f8c6a790c314fe9855e2>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tba3441daa3852b75-M306f79c4e00d5f8bdc787831
Delivery options: https://agi.topicbox.com/groups/agi/subscription
