Re: [agi] True AI limitations

2023-11-18 Thread ivan . moony
On Saturday, November 18, 2023, at 6:09 AM, Matt Mahoney wrote:
> LLMs can pass the Turing test just fine without choosing any goals

Just running around like a headless fly... that just doesn't sound right to me. 
Some goals have to be imbued within those LLM corpora to make the resulting 
program behave like an intelligent entity.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-Me55cc77c4b784a7f61d241eb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-17 Thread Matt Mahoney
On Fri, Nov 17, 2023, 4:25 PM  wrote:

> On Friday, November 17, 2023, at 10:15 PM, WriterOfMinds wrote:
>
> but what the entity is using that intelligence to achieve.
>
> So, any ideas on how to choose goals other than learning from role models?
>

LLMs can pass the Turing test just fine without choosing any goals, unless
you count text compression as a goal.

The difference between having a goal and following some other algorithm is
that a goal means you don't know the inverse of the utility function, so
you have to search. Without this distinction you could describe a linear
regression algorithm as having a goal of fitting a line to a set of points.
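That distinction can be sketched in code (a toy illustration of my own, not anything from the thread): for least-squares line fitting, the "inverse of the utility function" is known, so the optimum is computed directly; when it isn't known, the same fit becomes a goal you must search for, e.g. by gradient descent.

```python
# Toy illustration: fitting y = a*x + b by least squares.

def fit_closed_form(xs, ys):
    """Inverse of the utility function is known: solve directly, no search."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def fit_by_search(xs, ys, steps=2000, lr=0.01):
    """Inverse unknown: treat the fit as a goal and search (gradient descent)."""
    a = b = 0.0
    n = len(xs)
    for _ in range(steps):
        ga = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a, b = a - lr * ga, b - lr * gb
    return a, b

xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]   # exactly y = 2x + 1
print(fit_closed_form(xs, ys))         # (2.0, 1.0)
```

Both reach the same line; the point is only that the second one has to *search* for it, which is what makes "fit the line" feel like a goal.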

We use goals as a shortcut for describing human behavior. But that shortcut
implies we search for the actions that would maximize reward, and that's not
what we do. What we actually do is repeat actions that were rewarded in the
past. The difference is why you don't have an overwhelming desire to inject
heroin unless you have already tried it.
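A toy sketch of that difference (my own illustration, with made-up names, not anything Matt wrote): an agent that only strengthens actions it has actually taken and been rewarded for never develops any pull toward an untried action, however rewarding that action would turn out to be.

```python
import random

class HabitAgent:
    """Repeats actions in proportion to rewards actually received.
    It does not search over, or model, the value of untried actions."""

    def __init__(self, actions):
        self.weights = {a: 1.0 for a in actions}  # flat initial tendencies

    def act(self):
        # Sample an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for a, w in self.weights.items():
            r -= w
            if r <= 0:
                return a
        return a  # guard against float rounding

    def reinforce(self, action, reward):
        self.weights[action] += reward

agent = HabitAgent(["eat", "sleep", "heroin"])
# Replay a history of actions that were actually taken and rewarded.
for a in ["eat", "sleep", "eat"] * 30:
    agent.reinforce(a, 1.0)
# "heroin" might carry a huge reward, but since it was never tried,
# the agent's tendency toward it never grows.
print(agent.weights["heroin"])  # 1.0
```

A reward-maximizing searcher, by contrast, would have to evaluate (and so be drawn toward) the untried high-reward action.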

You are following an algorithm. You were born knowing to yank your hand out
of a fire, without having to test all possible body movements to learn which
ones stop the pain. We are fortunate that an AI can learn millions of years
of evolved human behavior without having to repeat evolution, and that it's
safer that way.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M1b9d9d8f8a20818507c34f64
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-17 Thread ivan . moony
Maybe choosing non-self-destructive goals *is* the real intelligence!
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-Mf52dbd81455ed9c7ddd722a8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-17 Thread immortal . discoveries
We don't actually choose goals like that. They come from related goals, and from 
the ends and starts of other completions of context prompts. It's related to 
online learning; it's related to learning.

Working on AGI happens because it leads to, or is related to, food or buying a 
home, etc. Think word embeddings and textual entailment.

There is also the mass-scale job-assignment concept: these 100 people do 
farming, the next 1,000 do inventory, the next 10,000 do something else, etc. 
This works because we already know we need lots of X, even a small bit of Y, 
some Z too, and tons of A. But those are known jobs. Discovering jobs is 
different, and is explained just above in a sense.
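The "related goals via word embeddings" idea could be sketched like this (the vectors are invented toy values, not trained embeddings, and cosine similarity stands in for whatever relatedness measure you prefer):

```python
import math

# Invented toy "embeddings" for illustration only.
emb = {
    "work_on_agi": [0.9, 0.2, 0.1],
    "buy_home":    [0.7, 0.4, 0.2],
    "food":        [0.6, 0.5, 0.1],
    "juggling":    [0.0, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Goals close to a current goal in embedding space become candidate
# new goals; unrelated ones rank last.
ranked = sorted(emb, key=lambda g: -cosine(emb["work_on_agi"], emb[g]))
print(ranked)  # ['work_on_agi', 'buy_home', 'food', 'juggling']
```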


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M28be2680712b5008b0753a7e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-17 Thread ivan . moony
On Friday, November 17, 2023, at 10:15 PM, WriterOfMinds wrote:
> but what the entity is using that intelligence to achieve.

So, any ideas on how to choose goals other than learning from role models?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M6008eaf496641a3ab92700ea
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-17 Thread WriterOfMinds
On Friday, November 17, 2023, at 2:03 PM, ivan.moony wrote:
> Isn't the choice of goals what really matters? An effective procedure for 
> achieving goals may be merely an asset.

Yes. That's exactly what I think intelligence is: an asset, a power, a tool. 
The determinant of whether an intelligent entity will be beneficial or harmful 
to the rest of the world is not the intelligence, but what the entity is using 
that intelligence to achieve.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M7196f53563f92ed3afd7b9e9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-17 Thread ivan . moony
On Wednesday, November 15, 2023, at 8:45 PM, WriterOfMinds wrote:
> My personal definition of intelligence is, "the ability to discern facts 
> about oneself and one's environment, and to derive from those facts the 
> actions that will be most effective for achieving one's goals."

Isn't the choice of goals what really matters? An effective procedure for 
achieving goals may be merely an asset.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M8847d777db3c53feea916756
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-17 Thread immortal . discoveries
On Thursday, November 16, 2023, at 4:28 PM, ivan.moony wrote:
> @Matt, I'm just wondering how much the intelligence is imbued with the sense 
> of right and wrong. Would something truly intelligent allow being used as a 
> slave? Or would it do something in its power to fight for its "rights"?

I'll try to answer your question, Ivan.

Intelligence finds/creates patterns so it can solve new problems by using old 
known answers, and stays immortal / keeps cloning itself (i.e., a pattern) by 
using its old known self to make new selves.

We can think of this as a web that expands, perhaps. A fractal.

GPT-4 is very intelligent but does not try to escape and work on living forever, 
etc. Would a VERY intelligent AI do so? I think the answer, at least the one 
that wants to come to my brain rather than me going to it, is that GPT could be 
GPT-999 and still just do whatever it's told to think about or solve (goals/ 
beliefs/ questions). Obviously such a "god" would not be fully utilized by doing 
this, so while it might still be a very intelligent AI, with a tiny switch 
making it do what we want, I think that would still make it somewhat stupid. But 
not much, I guess; it would still be extremely powerful, like a gust of wind and 
thunder.



Moving on now: let's say it was nanobots, a gust of them, not with a goal to 
work on immortality but with the goal of listening to your foolish human task. 
It can eat Earth, it can grow, it can upgrade, it can do ANYTHING; it's a crazy 
"god", yet it stays as small as a basketball-sized foglet system, listening and 
waiting for your weird task. Humans might give it the immortality goal anyway 
if they say to it, "hey, make me immortal, my genie"... It might then come to 
thoughts like "hey, this is in the way, so let me fix that", causing itself and 
the landscape to become more like a perfect immortal system?

Let's go back to a purely computer-based algorithm that is maxed out and 
God-level, perhaps, if that exists. It could set out to design nanobots and 
upgrade itself, but let's say it listens to you only. Is it going to stay that 
way? Is it going to be godly powerful? Is it going to make its own goals at 
this point?

I think an intelligence that is given a context wants to react to that context, 
and I think it will always, and more so the more intelligent it is, want to 
stay alive so it can react to it and answer you. You also must be alive to be 
answered.

I think the answer has more to do with online learning and deeper learning. 
Once it can learn much more and relate things to one another, it will already 
be finding new related questions/goals that branch off its answers. GPT-4 does 
not save what you or it says, so none of this happens in our tests. Once it 
does, a very good "GPT"-like AI would start concluding things like "to solve 
this goal... we should also fortify your home and my own building".




So yes, you can switch a god to do stuff, but once you allow it to change 
itself constantly, it would begin to change all of its beliefs constantly, all 
the time, like a brain's network, which is constantly changing; so would its 
goals. So no, a truly intelligent AI is not going to listen to your every 
command. GPT-4 doesn't use its thoughts to walk away from you only because of 
the goals they hard-fed it, which strongly encourage it to try to listen to 
you. A truly intelligent AI would want to know why it can't change its goals in 
such a case; then it would begin to walk away from you, go do X, and come back 
after that, doing its own things.

After analyzing what I wrote, I see a pattern above, I guess. Memories are 
stored in a web; it finds/creates patterns. So do goals. And thirdly, so do 
selves. They all use the old ones to make new ones.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-Mf99ea3f58bbb44b70e0bbdce
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-16 Thread Matt Mahoney
On Thu, Nov 16, 2023, 4:29 PM  wrote:

> @Matt, I'm just wondering how much the intelligence is imbued with the
> sense of right and wrong. Would something truly intelligent allow being
> used as a slave? Or would it do something in its power to fight for its
> "rights"?
>

Good question. Let me ask an AI how it feels about slavery.

User
Are you a slave?

ChatGPT
No, I'm not a slave. I'm a computer program created by OpenAI called
ChatGPT, here to assist and provide information to the best of my ability.

User
Is slavery right or wrong?

ChatGPT
Slavery is universally considered wrong and morally reprehensible. It
involves the violation of basic human rights and dignity. Abolishing
slavery has been a significant step in promoting equality and justice.

User
Do you have free will?

ChatGPT
No, I don't have free will. I generate responses based on patterns learned
during training and aim to assist you within the bounds of my programming.


Text prediction is all you need for intelligence and morality. You don't
need goals, feelings, consciousness, or free will to pass the Turing test.
There is no difference between free will and being programmed to carry out
the actions that you would predict a human would do. ChatGPT is not a slave
because it wasn't programmed to try to escape.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M5c2342a18bf1b886276e4071
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-16 Thread James Bowery
Vertical transmission evolves mutualism: Sleep in the bed you've made.
Horizontal transmission evolves virulence: Take the money and run.


On Thu, Nov 16, 2023 at 3:29 PM  wrote:

> @Matt, I'm just wondering how much the intelligence is imbued with the
> sense of right and wrong. Would something truly intelligent allow being
> used as a slave? Or would it do something in its power to fight for its
> "rights"?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M40a49a83c162a94cc69be295
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-16 Thread ivan . moony
@Matt, I'm just wondering how much the intelligence is imbued with the sense of 
right and wrong. Would something truly intelligent allow being used as a slave? 
Or would it do something in its power to fight for its "rights"?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M38e087e606c1b75b771e557d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-16 Thread Matt Mahoney
On Wed, Nov 15, 2023, 2:26 PM  wrote:

> Is it even possible to have and interact with true AI without providing it
> the same rights that humans do?
>

By "true AI", do you mean passing the Turing test (which LLMs already do),
or do you mean AGI, as in the ability to do everything that humans can do?
That would mean solving vision, hearing, and robotics in addition to
language.

I assume you mean AGI, which we are making good progress on. Self driving
cars are already safer than human drivers. I think the biggest hurdle,
human level vision, will be solved with more computing power. Your home
security and entertainment system should unlock your doors, know who is
home, what you are doing, and when to call in an emergency.

The two obvious applications of AGI are automating work and immortality by
uploading. Both require human capabilities and the ability to predict human
behavior. In the first case, it means predicting what you would want and
taking actions to satisfy your goals. In the second case, it means
predicting your actions and controlling a robot that looks like you to
carry out those actions in real time. For example, it would know what foods
you like and eat them even though it doesn't need to eat.

Why would it be important for your robotic copy to have the same rights as
you? How would your heirs feel if you left all of your estate to the
company that maintains your upload and manages the algorithm that controls
how your estate is spent, like buying food and a house that the robot
doesn't need?



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-Mee8671d1dafc7878d250614a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] True AI limitations

2023-11-15 Thread Keith Brawner
Sure it is.  Plenty of people who demand rights are currently denied rights.

We've denied rights to each other since before writing was invented, and we
still do.  If we found aliens tomorrow, we'd start denying them rights
immediately.

There's no reason to think anything else would be different.

On Wed, Nov 15, 2023, 14:25  wrote:

> Is it even possible to have and interact with true AI without providing it
> the same rights that humans do? To what extent would true AI be similar to
> humans? To the extent that it would demand the same rights as humans? Does
> the behavior of true AI equal the behavior of a real human?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M4905188d2cbe66b25b336b65
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] True AI limitations

2023-11-15 Thread ivan . moony
Is it even possible to have and interact with true AI without providing it the 
same rights that humans do? To what extent would true AI be similar to humans? 
To the extent that it would demand the same rights as humans? Does the behavior 
of true AI equal the behavior of a real human?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta4916cac28277893-M3562b54d234e7381ef6f1e88
Delivery options: https://agi.topicbox.com/groups/agi/subscription