Robin <mixent...@aussiebroadband.com.au> wrote:

> >   For example, if asked "Can you pour water into a glass made of sugar?",
> >   ChatGPT might provide a grammatically correct but nonsensical response,
> >   whereas a human with common sense would recognize that a sugar glass
> >   would dissolve in water.
>
> ....so where did it get this example from, or all of these points for that
> matter?
>

Probably from a human author writing about AI. I asked it in a follow-up
question:

ME: So, how would you respond to the question: "Can you pour water into a
glass made of sugar?"

ChatGPT: As an AI language model, I can provide a grammatically correct
response based on my training data. However, my response may not reflect
common sense or practicality in the real world.

If I were to respond based on my training data, I might say something like:
"Yes, you can pour water into a glass made of sugar, but the glass will
dissolve." However, it's important to note that this is not a practical or
sensible thing to do in reality because the sugar glass would quickly
dissolve and the water would spill out.

It's worth mentioning that humans have common sense and contextual
knowledge that allow us to understand the impracticality of pouring water
into a glass made of sugar. This is a good example of a situation where AI
language models like myself may struggle to understand and respond in a way
that reflects human common sense and practicality.

This response may reflect the earlier part of the conversation.


>
>
> >   It cannot draw on personal experiences, general knowledge, or
> >   reasoning to inform its responses in the way a human would.
>
> General knowledge can be obtained from the Inet. "reasoning" is clearly
> not true. Without it, one cannot reply to a
> sentence.
>

I assume you mean "the Internet." ChatGPT can scrape content from human
authors on the internet, but it has no reasoning. It literally does not
know what it is talking about, in the same sense that a bee making a nest
or directing other bees to a source of nectar does not know what it is
doing. The bee is acting by instinct with no planning or awareness. ChatGPT
is acting by programming with no plan or awareness. That is why it cannot
tell the difference between reality and what are now called
"hallucinations" (fake information invented by ChatGPT).


> >   world. It cannot perform physical tasks like walking, manipulating
> >   objects, or performing surgery, which are essential for many
> >   real-world applications.
>
> There are already robots that perform these things. They require only
> programming to interact with the real world....and
> many already have Inet connectivity, either directly or indirectly.
>

When these robots are controlled by advanced AI in the future, they may
approach or achieve AGI partly because of that. ChatGPT is not saying that
AGI is impossible; she is saying that some kind of robotic control over
physical objects is probably a necessary component of AGI, which she
herself has not yet achieved.



> >   5. Lack of self-awareness: ChatGPT does not have the ability to reflect
> >   on its own thoughts, actions, or limitations in the way that a
> >   self-aware human being can. It cannot introspect, learn from its
> >   mistakes, or engage in critical self-reflection.
>
> ....AutoGPT?
>

Not yet.


> The point I have been trying to make is that if we program something to
> behave like a human, it may end up doing exactly
> that.


The methods used to program ChatGPT are light-years away from anything like
human cognition, as different as what bees do with their brains is from
what we do. ChatGPT is not programmed to behave like a human in any sense.
A future AI might be, but this one is not. The results of ChatGPT
programming look like the results from human thinking, but they are not.
The results from bee-brain hive construction look like conscious human
structural engineering, but they are not. Bees do not attend MIT.
