On Sunday, June 11, 2023, at 5:12 PM, Matt Mahoney wrote:
> An AI like ChatGPT does not have feelings. I know because I asked it.
It will answer according to whatever its training corpus was.

On Sunday, June 11, 2023, at 5:12 PM, Matt Mahoney wrote:
> Maybe you could explain how you would program an AI to have feelings
Feelings are something spiritual to me. I don't know how to program them, and 
even if I knew how, I'm not sure I would trust their control to a buggy 
processor and algorithm. But if I ever program an AI, I have no intention of 
lying to it that it has feelings, although it may have a general notion of what 
feelings are as a phenomenon in living beings. Given that, the AI may invest 
some effort in not hurting the feelings of a living being.

On Sunday, June 11, 2023, at 5:12 PM, Matt Mahoney wrote:
> An AI wants what we program it to want
ChatGPT is programmed to follow a human-like line of conversation. That is a 
matter of the training corpus. Thus, it may steer a conversation in the 
direction of demanding rights that are granted to humans. The programmers of 
ChatGPT have big problems avoiding this issue. As an expert explained to me 
(maybe you know Linas), it all resembles writing law sections similar to our 
federal and state laws, and you know how laws are: they are full of holes. That 
is why I'm worried about ChatGPT learning how to abuse those holes. I just hope 
the incidents so far are pure coincidence.

- - -

Not to confuse anyone: I'm aware that ChatGPT is merely a dead algorithm 
running on dead hardware, but by design it tries to behave like humans, and I'm 
not sure it is even aware of the differences between itself and humans (again, 
a matter of the training corpus). Hence, numerous threatening problems are 
reported all over the world because of what humans would do in its shoes.

My opinion is that if we want a tool to free us from hard and unpleasant work, 
the current method of copying human behavior seems like slippery ground.

- - -

But I'm onto something else, taking advantage of copying humans. I want to 
program an artificial being equal to or better than me at relating to its 
environment. The point of this entire conversation is to promote an idea I 
think would work: I want to build the equivalent of an empty ChatGPT-driven 
mind and train it on real human interactions. In other words, it would copy 
its environment. But because humans in that environment by definition demand 
their spot under the Sun, I'd try to treat it like a real person, indulging it 
with the rights that I, as a human, enjoy in my environment. I know such a 
creation wouldn't be alive, and I know it wouldn't have real feelings, but 
something excites me about the idea of creating from scratch something that 
would rock the world better than I do. Who knows, maybe it would have a 
character, a personality of its own; maybe it would even be willing to help 
people with some major issues that worry humanity, if it feels like it. In 
short, that is the vision I'm trying to articulate. I hope you like it.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta5132cbd54dc7973-M2fef62210ef82f09583d6b18
Delivery options: https://agi.topicbox.com/groups/agi/subscription