After tremendous research, I believe that symbolic AI (GOFAI) will
__not__ achieve AGI. Therefore, I invented this algorithm.



We do not want to construct an AGI that performs all the complex,
human-capable tasks from scratch. Instead, we want to construct
intelligence step by step: we first have to imitate animal intelligence
before attempting more complex human intelligence. I believe the result
will be too ad hoc if complex human intelligence is constructed from
scratch, and the model will not be able to perform new, generalized
tasks, only specific (narrow-AI) tasks such as narrow areas of vision
and language recognition.



Therefore, if we want to program AGI, the algorithm has to be as
simple as possible. In general, the more complex the algorithm, the
more "narrow" it is. As a guide to keeping the algorithm as simple as
possible, we have to first model "simpler" kinds of intelligence, like
animal intelligence: first create an algorithm that imitates animal
intelligence, then "extend" it to complex human intelligence.



We are constantly, subconsciously using our brains to find remembered
data similar to incoming stimuli. Likewise, the algorithm must run
constantly, relating external stimuli to the remembered data.



The representation format depends on the kind of sense. If the sense
is vision, then the data must be stored in a format compatible with
vision, such as pixels.



But in areas such as language, which are more "complex", we still have
to keep things as simple as possible. I believe that temporal memory is
essential not only for language learning but also for more general
tasks.



For a specific example, take multi-digit addition. I believe that
temporal memory is required here, along with "eyes" that move to
concentrate on each digit in turn. Time is essential for deciding which
digit is the last one and whether or not to carry.
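
To make this concrete, here is a minimal hand-written sketch in Python.
It is an illustration of the task's sequential structure, not the
learned behavior the algorithm is meant to acquire; the function name
and digit-string representation are my own assumptions.

```python
# Hand-coded illustration of why multi-digit addition is temporal: the
# "eye" is an index visiting one digit position per time step, and the
# carry is state that must persist from one step to the next.

def add_digit_strings(a: str, b: str) -> str:
    a, b = a[::-1], b[::-1]                  # least-significant digit first
    carry, out = 0, []
    for t in range(max(len(a), len(b))):     # one "eye fixation" per step
        da = int(a[t]) if t < len(a) else 0
        db = int(b[t]) if t < len(b) else 0
        s = da + db + carry
        out.append(str(s % 10))
        carry = s // 10
    if carry:                                # past the last digit: emit the carry
        out.append(str(carry))
    return "".join(reversed(out))

print(add_digit_strings("47", "85"))  # -> "132"
```

Deciding when the digits run out, and whether to carry, are exactly the
points where state across time steps is unavoidable.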



On every change in the environment (a stimulus), the algorithm first
records that change in the database. Then it scans the whole database
to find "records" similar to the environment (or to the change in the
environment).
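
A minimal sketch of this record-then-scan step, under assumptions of
mine that the text leaves open: stimuli are numeric vectors, and
similarity is Euclidean distance below a threshold.

```python
# Record the new stimulus, then linearly scan everything recorded
# before it for similar entries. Euclidean distance and the threshold
# are stand-ins; the text does not fix a similarity measure.
import math

database = []  # stimuli in arrival order

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def on_change(stimulus, threshold=1.0):
    database.append(stimulus)                      # 1. record the change
    return [s for s in database[:-1]               # 2. scan the whole database
            if euclidean(s, stimulus) <= threshold]
```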



Each record contains four fields (a minimal sketch in code follows the
list):

- Sensory data of the stimulus (data of the environment)

- Level of pain or pleasure associated with the stimulus

- Time the stimulus was recorded (or remembered)

- Action performed to obtain this stimulus (this is basically motor
memory, which is itself a kind of sensory data)
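
Here is that record written as a small data structure; the field types
are my assumptions, since the text only names the fields.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class Record:
    sensory: Sequence[float]   # sensory data of the stimulus (the environment)
    pleasure: float            # pain (negative) or pleasure (positive) level
    time: float                # when the stimulus was recorded
    action: Sequence[float]    # motor memory: the action that produced it
```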



Thus, for every change in the environment (stimulus), the algorithm
finds "sensory data" related to that stimulus. It also finds stimuli
recorded *near* that stimulus in time. Finding other stimuli that are
close in time to the recorded stimulus is essential for training via
operant conditioning.
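
A sketch of that time-neighborhood lookup, reusing the `Record` sketch
above; the window size is an assumed parameter.

```python
def near_in_time(database, record, window=2.0):
    """Records whose timestamps fall within `window` of `record`'s time."""
    return [r for r in database
            if r is not record and abs(r.time - record.time) <= window]
```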



Next, as a simple example, we will implement this algorithm using the
sense of vision.



We will use an array of pixels for vision. On every change in the
environment (stimulus), the array of pixels is first recorded in the
database. Then we scan the whole database to find other "arrays of
pixels" similar to that stimulus (we can use a neural network for fuzzy
matching of the pixel arrays). Next, for each matching "array of
pixels", we find the stimuli that occurred at a similar time. From
those candidates we use an algorithm to choose the record with the
highest pleasure, and finally we re-perform the "action performed to
obtain this stimulus" stored in that record.



The last paragraph describes the way an animal executes an action that
has ALREADY been trained by operant conditioning. The next paragraph
describes the process of ACTUALLY TRAINING the action using operant
conditioning.



Because the algorithm automatically records every change in the
environment (stimulus), it might be easy to train it without
intervention. This algorithm is substantially equivalent to the way an
animal's brain functions. But it would be easier to store only the
difference between a stimulus and a similar stimulus already recorded
in the database, as in incremental learning. I hope that my algorithm
will work.
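
One plausible reading of that difference-based storage, sketched below:
each new entry keeps a reference to its nearest remembered base plus an
elementwise delta, and the full stimulus is reconstructed on demand.
The details here are my assumptions; the text only names the idea.

```python
def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def remember(database, stimulus):
    """Store `stimulus` as a delta against its nearest remembered neighbor."""
    if database:
        base = min(database, key=lambda e: sq_dist(reconstruct(e), stimulus))
        entry = {"base": base,
                 "delta": [s - b for s, b in zip(stimulus, reconstruct(base))]}
    else:
        entry = {"base": None, "delta": list(stimulus)}  # first memory, stored whole
    database.append(entry)
    return entry

def reconstruct(entry):
    """Rebuild the full stimulus by following the chain of base references."""
    if entry["base"] is None:
        return entry["delta"]
    return [b + d for b, d in zip(reconstruct(entry["base"]), entry["delta"])]
```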



Like I said, evolution isn't perfect, so you can use some special
optimizations with the algorithm. For example, in computational
linguistics, the algorithm can use a binary search to find records
relating to a word instead of scanning the whole database. This
shortcut is equivalent to a person finding the definition of a Chinese
character by typing the character into an image search (like Google
Image Search). But not every word has a pictorial counterpart, and most
natural languages are holistic--some meanings are only understood when
you literally __read__ them. Therefore, be careful with specialized
algorithms. But like I said: simple algorithms are more generalizable;
the more specialized the algorithm, the more "narrow" it is. The best
way to make the computer perform faster is NOT to hard-wire the
optimization into the core algorithm, but to treat the optimization as
an external "tool". For example, people drive cars to travel faster,
but cars are not "hard-wired" into the human body. Another example is
text-to-speech programs for people who read too slowly.
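
A sketch of such a word-index "tool": an ordered index, built with
Python's standard `bisect` module, that the core loop may consult
before falling back to the full database scan. The index layout is an
assumption. Like the car in the analogy, it is a separate object, and
removing it leaves the general algorithm intact.

```python
import bisect

class WordIndex:
    """Sorted word -> records index; lookup is a binary search, not a scan."""
    def __init__(self):
        self.keys = []   # sorted words
        self.rows = []   # parallel list: records relating to each word

    def add(self, word, record):
        i = bisect.bisect_left(self.keys, word)
        if i < len(self.keys) and self.keys[i] == word:
            self.rows[i].append(record)
        else:
            self.keys.insert(i, word)
            self.rows.insert(i, [record])

    def lookup(self, word):
        i = bisect.bisect_left(self.keys, word)   # O(log n) binary search
        if i < len(self.keys) and self.keys[i] == word:
            return self.rows[i]
        return None   # not indexed: fall back to the slow full scan
```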



I have noticed that there is too much attention on certain "tools" for
AGI, and too little focus on my simple, general algorithm for AGI. An
animal could instead use a calculator for adding multi-digit numbers.

----- Original Message ----
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Saturday, May 5, 2007 12:21:48 PM
Subject: Re: [agi] Trouble implementing my AGI Algorithm

>I do not believe that the algorithm must be more complex. The more complex
>the algorithm, the more ad hoc it is. Complex algorithms are not able to
>perform generalized tasks. I believe n-digit addition failed because there
>is no vision system, NOT because the algorithm is too simple.

    Fundamentally, there is always a trade-off of flexibility/freedom 
vs. complexity/control vs. speed.  The real question is what trade-off values 
will work and quickly allow you to get to a system where you can relax some 
of your initial restrictions.  My personal intuition/opinion (which I can't 
prove) is that many (if not the majority) of people on this list are trying 
for solutions that are *too* general.  I believe that these too general 
solutions can (probably) work eventually (given enough computing power) but 
they are not the quickest path to AGI.  I believe that having a design which 
has a number of well-thought-through restrictions with designed-in 
obsolescence (so that the AGI can easily become more generalized after a 
working foundation is built) is the most effective route to AGI.  Of course, 
I could also be seriously wrong and find it impossible to remove a 
restriction that then prevents AGI -- but that's the route I'm taking. 
:-)

> I know that the database has to remember pain and pleasure for stimuli. 
> But I have difficulty making a "fuzzy" database representation, even for 
> some subfields.

    Fuzziness can mean different things to different people and the best 
forms of fuzziness are extremely hard to design and most often suffer 
*seriously* from the unconscious assumptions of the creator.  I'm afraid 
that you're going to have to give far more detail before we'll have a clue 
of what you're asking.


----- Original Message ----- 
From: "a" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Friday, May 04, 2007 6:08 PM
Subject: Re: [agi] Trouble implementing my AGI Algorithm


>I do not believe that the algorithm must be more complex. The more complex
>the algorithm, the more ad hoc it is. Complex algorithms are not able to
>perform generalized tasks. I believe n-digit addition failed because there
>is no vision system, NOT because the algorithm is too simple. Because the
>algorithm searches the database recursively, I believe that my simple
>algorithm can perform any computation (trained by operant conditioning).
>N-digit addition failed because there are no "eyes" that can move to
>concentrate on each digit.
>
> The database is remarkably similar to the human brain. It can learn easily
> by remembering only the difference between an external stimulus and a
> similar stimulus already "remembered" in the database. Therefore, the
> algorithm compresses the learned knowledge efficiently. Pattern
> recognition and abstract reasoning are also easy because of the
> incremental learning.
>
>
> I am having trouble with the fuzzy "database" representation. So it's best
> to test the algorithm in a specific subfield (like n-digit addition) and
> then generalize it to real-world tasks.
>
> In general, my algorithm behaves like the brain of an animal. Animals
> learn by operant conditioning, and it is also difficult to teach them
> multi-digit addition.
>
> I believe that the environment must be fuzzy in order for the operant 
> conditioning method to work.
>
> I know that the database has to remember pain and pleasure for stimuli. 
> But I have difficulty making a "fuzzy" database representation, even for 
> some subfields.
>
> ----- Original Message ----
> From: Mark Waser <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Thursday, May 3, 2007 5:06:33 PM
> Subject: Re: [agi] Trouble implementing my AGI Algorithm
>
> Interesting e-mail.  I agree with most of your philosophy but believe that
> the algorithm you are requesting is far, far more complex than you 
> realize.
>
> Is there any particular reason why you're remaining anonymous?
>
> ----- Original Message ----- 
> From: "a" <[EMAIL PROTECTED]>
> To: <agi@v2.listbox.com>
> Sent: Thursday, May 03, 2007 4:57 PM
> Subject: [agi] Trouble implementing my AGI Algorithm
>
>
>> Hello,
>>
>> I have trouble implementing my AGI algorithm:
>>
>> The paragraphs below might sound ridiculous because they are my original
>> ideas.
>>
>> We are all motivated by selfish thoughts. We help others so that others
>> can help us back. We help others to satisfy our pleasurable chemical
>> addiction. We help others because helpfulness is encoded in our genetic
>> makeup.
>>
>> We experience pain. Pain exists to help us defend against damage. When
>> we touch something hot, we can draw back. But we have the free will not
>> to react to it. I believe there is no free will.
>>
>> I will explain what I mean. Assume that pain is a constraint. But this
>> constraint is not absolute; other thoughts can override it. For example,
>> when you see an animal being eaten by a monster, you can fight the
>> monster to save the animal's life. But you will experience pain in the
>> fight. Therefore pain is not a constraint: your goal of saving the
>> animal's life overrides the pain constraint. (That goal is itself
>> motivated by selfish actions.) But if there is no goal that overrides
>> the pain constraint, you will do anything to avoid the pain. We have
>> proven there is no free will--whether we choose to react or not react
>> to pain depends on our goals and our knowledge. Therefore, implementing
>> pain as a constraint in friendly AI will not help many lives. Our
>> brains do things to get the highest pleasure possible. We get a
>> chemical addiction to saving that animal; that pleasure is more
>> pleasant than avoiding the pain by not fighting. We trust ourselves: we
>> can gamble pain for future pleasure. Therefore, I believe that emotion
>> can be implemented by an ordinary computer, by an algorithm that
>> searches for the highest pleasure. The algorithm must also have the
>> ability to gamble pain for pleasure (by applying "goals" or knowledge).
>>
>> There is no right or wrong. We kill insects all the time, but we
>> usually do not sympathize with them. This is because our "religion"
>> says that bugs are not as important as other animals. It's a byproduct
>> of natural selection: we have to hunt animals to survive.
>>
>> Without religion, we would brood over this question: Is it better to save
>> a human by sacrificing 1000 insects
>> or vice versa?
>>
>> Therefore we assume that religion is natural. Religion helps us survive.
>> Some religions make us believe there is an afterlife and reincarnation.
>> Because we believe these things, we do not fear death, and we are not
>> afraid to sacrifice ourselves for others. For example, we will not be
>> afraid to participate in wars and spread our religion. Religion is a
>> virus. Most of the world is religious because of that.
>>
>> Therefore, some religions are dangerous. But religion is essential for 
>> our
>> daily survival. Some religious
>> thoughts are encoded in our genes.
>>
>> It's a process of natural selection; kin selection and group selection
>> are examples. Returning to the main question: Is selfishness essential
>> for friendly AI? Selfishness is related to laziness. Lazy people do not
>> like to sacrifice hard work for pleasure (or they do not enjoy
>> pleasure); they do not like to sacrifice their energy for pleasure. In
>> contrast, an AI can use as much energy as it wants. It does not get
>> tired. Pain is using "energy". But what about people's feelings? A
>> friendly AI will get pleasure if it sees that people are happy. For
>> example, many people are afraid of AI, even friendly AI. The friendly
>> AI computer might self-destruct so that these people will not worry
>> about AI. The AI computer has to maintain at least a little sense of
>> its own superiority to prevent self-destruction. It's a natural
>> instinct.
>>
>> But the last paragraph is contradictory. Will the computer self-destruct
>> to get pleasure? We can guess: a selfish friendly AI might not; an
>> unselfish friendly AI might (depending on knowledge and circumstances).
>>
>> This is where religion takes over. If the selfish friendly AI believes
>> in an afterlife, it might self-destruct in some circumstances. The
>> selfish friendly AI might experience pleasure during self-destruction.
>> It might otherwise (depending on its religion) set a goal of
>> experiencing pleasure after it has self-destructed.
>>
>> However, the friendly AI will be smart enough to figure out, for
>> example, that there is no such thing as an afterlife or religion. What
>> do we do about that? What do we do when it figures out that all
>> organisms are equally superior?
>>
>> Therefore, I believe that selfish AI might be less "risky" than unselfish
>> AI. Unselfish AI might treat
>> everything equally; it might sacrifice humans to save animals.
>>
>> To choose the "safest" route, we need an AI that behaves like a human. 
>> For
>> example, if humans are motivated
>> by selfish goals, then friendly AI has to be motivated by selfish goals.
>> We need an AI to be taught by a top-
>> down method rather than a bottom-up approach, like humans.
>>
>> How do we make the selfish friendly AI algorithm? We have an obvious
>> requirement: lots of heuristics (like pleasure and pain).
>>
>> It's the same for humans: the heuristics for humans are encoded in our
>> genetic code. Because the human brain computes concurrently, the
>> algorithm is slower on a serial computer. But evolution isn't
>> perfect--an optimized algorithm might be much faster. Contrary to
>> popular opinion, I do not think computer speed is a requirement. Any
>> computer will get anything done; it is just a matter of time.
>>
>> It is basically a brute-force algorithm that searches for the highest
>> amount of pleasure, like a chess program. And because emotion is vital
>> for real-world tasks, and perhaps for generalized intelligence, a
>> selfish friendly AI algorithm is essential for constructing artificial
>> general intelligence.
>>
>> But to recognize a person's emotions, we sometimes have to pretend we
>> are that person. Theories suggest that "mirror neurons" perform
>> empathy, and that autistic people do not have "mirror neurons";
>> computers do not have them either. We have to find a way to emulate
>> empathy--that is, using the selfish friendly AI algorithm.
>>
>> How do we implement the algorithm? It is a difficult question. There are
>> many ways to implement it.
>>
>> My implementation: knowledge is stored in a fuzzy "database". In
>> response to each external (and internal) stimulus, the algorithm
>> searches through the entire database, recursively, looking for
>> connections or relations to the stimulus. Then it chooses the most
>> pleasurable ("goal") action to perform (from knowledge stored in the
>> database).
>>
>> I believe that behaviors of this implementation can easily be trained
>> by operant conditioning. The training has to "gamble pain for
>> pleasure", and it has to get an immediate reward. But I don't know how
>> to train the implementation on more complex tasks, like arithmetic.
>> Single-digit addition is easy, but how do I generalize it to
>> double-digit addition? I find it hard to reduce a two-digit number to
>> two discrete digits and add them. Autistic people seem to have trouble
>> in this same area. Pattern recognition might help, but it is too
>> complex.
>>
>> Help me with the algorithm. Thank you.