Overly general solutions are not the quickest path to AGI. I believe that a design with a number of well-thought-through restrictions and designed-in obsolescence (so that the AGI can easily become more generalized after a working foundation is built) is the most effective route to AGI.

And before I get hammered -- no, this is *NOT* equivalent to a belief that narrow AI will eventually grow into AGI. Narrow AI applications all have far too many *required*, unremovable restrictions built into them, without which they would collapse. This is more like "training wheels" on a bicycle, or the scaffolding used to construct a building.

----- Original Message ----- From: "Mark Waser" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Saturday, May 05, 2007 12:21 PM
Subject: Re: [agi] Trouble implementing my AGI Algorithm


> I do not believe that the algorithm must be more complex. The more complex
> the algorithm, the more ad hoc it is. Complex algorithms are not able to
> perform generalized tasks. I believe the reason that n-digit addition was a
> failure is that there is no vision system, NOT that the algorithm is too
> simple.

Fundamentally, there is always a trade-off of flexibility/freedom vs. complexity/control vs. speed. The real question is which trade-off values will work and will quickly get you to a system where you can relax some of your initial restrictions. My personal intuition/opinion (which I can't prove) is that many (if not most) of the people on this list are trying for solutions that are *too* general. I believe that these too-general solutions can probably work eventually (given enough computing power), but they are not the quickest path to AGI. I believe that a design with a number of well-thought-through restrictions and designed-in obsolescence (so that the AGI can easily become more generalized after a working foundation is built) is the most effective route to AGI. Of course, I could also be seriously wrong and find it impossible to remove a restriction that then prevents AGI -- but that's the route I'm taking. :-)

> I know that the database has to remember the pain and pleasure associated
> with stimuli. But I have difficulty making a "fuzzy" database
> representation, even for some subfields.

Fuzziness can mean different things to different people, and the best forms of fuzziness are extremely hard to design and most often suffer *seriously* from the unconscious assumptions of their creator. I'm afraid that you're going to have to give far more detail before we'll have a clue what you're asking.


----- Original Message ----- From: "a" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Friday, May 04, 2007 6:08 PM
Subject: Re: [agi] Trouble implementing my AGI Algorithm


I do not believe that the algorithm must be more complex. The more complex the algorithm, the more ad hoc it is. Complex algorithms are not able to perform generalized tasks. I believe the reason that n-digit addition was a failure is that there is no vision system, NOT that the algorithm is too simple. Because the algorithm searches the database recursively, I believe that my simple algorithm can perform any computation (trained by operant conditioning). The failure at n-digit addition was because there are no "eyes" that can move to concentrate on each digit.

The database is remarkably similar to the human brain. It can learn easily by remembering only the difference between an external stimulus and a similar stimulus already "remembered" in the database. Therefore, the algorithm compresses the learned knowledge efficiently. Pattern recognition and abstract reasoning are also easy because of this incremental learning.
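As a rough illustration of what I mean by storing only differences (the bit-vector representation and distance measure below are placeholders I invented, not my actual design):

def distance(a, b):
    # Hamming distance between two equal-length tuples
    return sum(x != y for x, y in zip(a, b))

def remember(db, stimulus):
    """db maps a base stimulus to the diffs of similar stimuli seen later."""
    if not db:
        db[stimulus] = []          # the first stimulus is stored whole
        return
    base = min(db, key=lambda s: distance(s, stimulus))
    diff = tuple(i for i, (x, y) in enumerate(zip(base, stimulus)) if x != y)
    db[base].append(diff)          # store only the positions that changed

db = {}
remember(db, (1, 0, 1, 1))
remember(db, (1, 0, 0, 1))         # kept as the diff (2,) against (1, 0, 1, 1)
print(db)                          # {(1, 0, 1, 1): [(2,)]}

Each new stimulus costs only as much storage as it differs from its nearest neighbor, which is the compression I am after.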


I am having trouble with the fuzzy "database" representation. So it's best to test the algorithm in a specific subfield (like n-digit addition) first and then generalize it to real-world tasks.

In general, my algorithm behaves like the brain of an animal. Animals learn by operant conditioning, and it is likewise difficult to teach them multi-digit addition.

I believe that the environment must be fuzzy in order for the operant conditioning method to work.

I know that the database has to remember the pain and pleasure associated with stimuli. But I have difficulty making a "fuzzy" database representation, even for some subfields.

----- Original Message -----
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, May 3, 2007 5:06:33 PM
Subject: Re: [agi] Trouble implementing my AGI Algorithm

Interesting e-mail. I agree with most of your philosophy but believe that the algorithm you are requesting is far, far more complex than you realize.

Is there any particular reason why you're remaining anonymous?

----- Original Message ----- From: "a" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Thursday, May 03, 2007 4:57 PM
Subject: [agi] Trouble implementing my AGI Algorithm


Hello,

I have trouble implementing my AGI algorithm:

The paragraphs below might sound ridiculous, because they are my original ideas.

We are all motivated by selfish thoughts. We help others so that others will help us back. We help others to feed our pleasurable chemical addiction. We help others because helpfulness is encoded in our genetic makeup.

We experience pain. Pain helps us defend against damage: when we touch something hot, we draw back. But we have the free will not to react to it. Or do we? I believe there is no free will.

I will explain what I mean. Assume that pain is a constraint. But this constraint is not absolute; other thoughts can override it. For example, when you see an animal being eaten by a monster, you can fight the monster to save the animal's life, even though you will experience pain in the fight. Therefore pain is not an absolute constraint: your goal to save the animal's life overrides it. (Your goal to save the animal's life is itself motivated by selfish drives.) But if there is no goal that overrides the pain constraint, you will do anything to avoid the pain. This shows there is no free will: whether we react to pain depends on our goals and our knowledge. Therefore, implementing pain as a hard constraint in friendly AI will not help many lives.

Our brains do things to get as much pleasure as possible. We get a chemical reward for saving that animal, and that pleasure is greater than the pain we would avoid by not fighting. We trust ourselves; we can gamble pain for future pleasure. Therefore, I believe that emotion can be implemented on an ordinary computer, by an algorithm that searches for the highest pleasure. The algorithm must also have the ability to gamble pain for pleasure (by applying "goals" or knowledge); I sketch this in code below.

There is no right or wrong. We kill insects all the time, but we usually do not sympathize with them, because our "religion" says that bugs are not as important as other animals. It's a byproduct of natural selection: we have to hunt animals to survive.
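Here is the sketch I promised (Python; the numbers and action names are invented purely for illustration):

from dataclasses import dataclass

@dataclass
class Outcome:
    pain: float       # pain paid up front by taking the action
    pleasure: float   # pleasure gained if the gamble pays off
    p_success: float  # estimated chance that it pays off

def net_value(o: Outcome) -> float:
    # expected pleasure minus the guaranteed pain
    return o.p_success * o.pleasure - o.pain

def choose_action(options: dict) -> str:
    # pick the action with the highest expected net pleasure
    return max(options, key=lambda name: net_value(options[name]))

options = {
    "flee":  Outcome(pain=0.0, pleasure=1.0,  p_success=1.0),
    "fight": Outcome(pain=3.0, pleasure=10.0, p_success=0.6),
}
print(choose_action(options))      # "fight": 0.6 * 10 - 3 = 3 beats 1

The "gamble" is exactly the case where an action with nonzero pain still wins because its expected pleasure is larger.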

Without religion, we would brood over this question: is it better to save a human by sacrificing 1000 insects, or vice versa?

Therefore, we assume that religion is natural. Religion helps us survive. Some religions lead us to believe there is an afterlife and reincarnation. Because we believe these things, we do not fear death. We are not afraid to sacrifice ourselves for others; for example, we will not be afraid to participate in wars and spread our religion. Religion is a virus. Most of the world is religious because of that.

Therefore, some religions are dangerous. But religion is essential for our daily survival. Some religious thoughts are encoded in our genes.

It's a process of natural selection; kin selection and group selection are examples. Returning to the main question: is selfishness essential for friendly AI? Selfishness is related to laziness. Lazy people do not like to sacrifice hard work for pleasure (or they do not enjoy the pleasure); they do not like to spend their energy for pleasure. By contrast, an AI can use as much energy as it wants, and it does not get tired. Pain is spending "energy". But what about people's feelings? A friendly AI will get pleasure if it sees people happy. For example, many people are afraid of AI, even friendly AI, so the friendly AI computer would self-destruct so that these people need not worry about AI. The AI therefore has to maintain at least a little sense of its own worth to prevent self-destruction. It's a natural instinct.

But the last paragraph is contradictory: will the computer self-destruct to get pleasure? My guess: a selfish friendly AI might not; an unselfish friendly AI might (depending on knowledge and circumstances).

This is where religion takes over. If the selfish friendly AI believes in an afterlife, it might self-destruct in some circumstances. The selfish friendly AI might experience pleasure during self-destruction, or it might (depending on its religion) set a goal that it will experience pleasure after it has self-destructed.

However, the friendly AI will be smart enough to figure out, for example, that there is no such thing as an afterlife or religion. What do we do about that? What do we do when it figures out that all organisms are equally important?

Therefore, I believe that a selfish AI might be less "risky" than an unselfish AI. An unselfish AI might treat everything equally; it might sacrifice humans to save animals.

To choose the "safest" route, we need an AI that behaves like a human. For example, if humans are motivated by selfish goals, then a friendly AI has to be motivated by selfish goals. And we need the AI to be taught by a top-down method rather than a bottom-up approach, as humans are.

How do we make the selfish friendly AI algorithm? We have an obvious requirement: lots of heuristics (like pleasure and pain).

It's the same for humans: the heuristics for humans are encoded in our genetic code. Because the human brain computes concurrently, the algorithm will run slower on a serial computer. But evolution isn't perfect - an optimized algorithm might be much faster. Contrary to popular opinion, I do not think computer speed is a requirement. Any computer can get anything done; it is just a matter of time.

It is basically a brute-force algorithm that searches for the highest amount of pleasure, like a chess program. And because emotion is vital for real-world tasks, and perhaps for generalized intelligence, a selfish friendly AI algorithm is essential for constructing artificial general intelligence.
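To make the chess-program analogy concrete, here is a rough lookahead sketch (the simulate model is an assumption of mine, not a real design):

def best_total_pleasure(state, actions, simulate, depth):
    # Exhaustively try every action sequence `depth` moves deep and
    # return the largest cumulative pleasure reachable from `state`.
    # simulate(state, action) -> (next_state, pleasure) is assumed given.
    if depth == 0:
        return 0.0
    best = float("-inf")
    for action in actions:
        next_state, pleasure = simulate(state, action)
        total = pleasure + best_total_pleasure(next_state, actions, simulate, depth - 1)
        best = max(best, total)
    return best

# toy world: the state is a number, actions nudge it, pleasure = new state
actions = [-1, +1]
simulate = lambda s, a: (s + a, float(s + a))
print(best_total_pleasure(0, actions, simulate, depth=3))   # 1 + 2 + 3 = 6.0

Like a chess program, it is pure brute force: the branching factor, not the idea, is what limits it.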

But to recognize the emotions of a person, we sometimes have to pretend we are that person. Theories suggest that "mirror neurons" perform empathy. But computers do not have mirror neurons, and theories suggest that autistic people lack working "mirror neurons" too. We have to find a way to emulate empathy: that is, by using the selfish friendly AI algorithm.

How do we implement the algorithm? It is a difficult question. There are many ways to implement it.

My implementation: knowledge is stored in a fuzzy "database". In response to external (and internal) stimuli, the algorithm searches through the entire database, looking for connections or relations to the stimulus. It searches recursively. Then it chooses the most pleasurable ("goal") action to be performed, from knowledge stored in the database.
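A minimal sketch of this loop (the flat dictionary below is only a stand-in for the fuzzy database, which I have not designed yet):

def related_memories(db, stimulus, seen=None):
    """Collect every memory reachable from `stimulus` by following links."""
    if seen is None:
        seen = set()
    if stimulus in seen or stimulus not in db:
        return []
    seen.add(stimulus)
    found = [db[stimulus]]
    for link in db[stimulus]["links"]:
        found += related_memories(db, link, seen)
    return found

def respond(db, stimulus):
    """Choose the most pleasurable action among all related memories."""
    memories = related_memories(db, stimulus)
    if not memories:
        return None                # nothing learned about this stimulus yet
    return max(memories, key=lambda m: m["pleasure"])["action"]

db = {
    "hot":  {"links": ["pain"], "action": "withdraw", "pleasure": 0.9},
    "pain": {"links": [],       "action": "cry",      "pleasure": 0.1},
}
print(respond(db, "hot"))          # "withdraw"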

I believe that the behaviors of this implementation can be easily trained by operant conditioning. The training has to "gamble pain for pleasure", and it has to give an immediate reward. But I don't know how to train the implementation on more complex tasks, like arithmetic. Single-digit addition is easy, but how do I generalize it to double-digit addition? I find it hard to reduce a two-digit number to two discrete digits and add them. Interestingly, autistic people seem to have trouble in this same area. Pattern recognition might help, but it is too complex.
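For reference, the mechanical procedure I want the system to discover on its own is just column-wise addition with a carry (this is standard arithmetic, not my algorithm):

def add_two_digit(a: int, b: int) -> int:
    a_tens, a_ones = divmod(a, 10)
    b_tens, b_ones = divmod(b, 10)
    carry, ones = divmod(a_ones + b_ones, 10)       # ones column first
    hundreds, tens = divmod(a_tens + b_tens + carry, 10)
    return hundreds * 100 + tens * 10 + ones

assert add_two_digit(47, 85) == 132                 # 7 + 5 = 12, carry the 1

The hard part is getting the operant-conditioning training to discover the digit split and the carry on its own, rather than my hand-coding them like this.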

Please help me with the algorithm. Thank you.


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
