AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-27 Thread Dr. Matthias Heger
This link from today's science news shows that scientists and
mathematicians evidently share common abilities:

 

http://www.sciencedaily.com/releases/2008/10/081027121515.htm

 

From: Dr. Matthias Heger [mailto:[EMAIL PROTECTED] 
Sent: Monday, 27 October 2008 21:30
To: agi@v2.listbox.com
Subject: AW: [agi] If your AGI can't learn to play chess it is no AGI

 

Scientists and mathematicians must both solve the same problem:

 

They have to draw conclusions from available explicit knowledge in order to
make implicit knowledge explicit.

This ability is key to creativity and logical reasoning.

 

Thus, science and mathematics are not as different as they may seem.

 

-Matthias

 






AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-26 Thread Dr. Matthias Heger

I think most patterns we see in chess (and other domains) are unconscious
patterns, but they are strongly related to visual perception.

Probably good chess players consider thousands of patterns but they are not
aware of this ability.

http://www.psychology.gatech.edu/create/pubs/reingoldcharness_perception-in-chess_2005_underwood.pdf

-Matthias


-Original Message-
From: Charles Hixson [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 25 October 2008 22:25
To: agi@v2.listbox.com
Subject: Re: AW: [agi] If your AGI can't learn to play chess it is no AGI

Dr. Matthias Heger wrote:
 ...

 I think humans represent chess by a huge number of **visual** 
 patterns. The chessboard is 8x8 squares. Probably, a human considers 
 all 2x2, 3x3 4x4 and even more subsets of the chessboard at once 
 beside the possible moves. We see if a pawn is alone or if a knight is 
 at the edge of the board. We see if the pawns are in a diagonal and 
 much more. I would guess that the human brain observes many thousands 
 of visual patterns in a single position.

 This is the only explanation for me why the best chess players still 
 have a little chance to win against computers.

  

 E...

 -Matthias

Visual is not exactly correct, at least not for the single moderately 
skilled player that I can internally observe.  The patterns exist, and 
they are spatially represented, but as visual they are definitely 
cartoonish with accompanying annotations (see also this possibility, 
etc.).  Actually I don't see the pieces, but only their directions of 
movement and capture, and if I particularly attend to one section, I 
will hear the name of the piece before a cartoon representation of it 
becomes conscious.  This abbreviated representation allows me to 
consider many more position changes than would a more explicit imagery.  
Actually each position represents the entire board centered around a 
piece of interest, but parts of the board of less relevance to the 
actions being considered are fuzzed out, so that the same concepts can 
be used with many different board positions.  And for any one position, 
when considering moving a piece a particular image of this kind 
appears, and many will be scanned when contemplating a move.

Now I'm not a master, or even a rated player.  But I suspect that this 
kind of thing is also used by such people, only that it becomes so 
practiced that it becomes invisible.  Even as it is I'm frequently not 
even aware of evaluating very poor moves except when the game is towards 
the end, and my position is declining.  Then I tend to evaluate more 
consciously (probably just considering each move more extensively).  And 
I still barely consider pieces that, e.g., I can't move.  During the 
middle game I'm occasionally aware of considering them in a hypothetical 
manner (Well, if I could get that pawn out of the way, then...), but I 
don't notice that happening as much during the end game.  Probably such 
moves have already been considered and discarded.

Note that the image isn't a rectangular pattern.  What's contained 
within it is based on relevance, and it isn't exactly visual, merely 
spatial.  A bishop's move is seen as a sweeping diagonal, probably a 
vectoral representation.  And adjacent squares aren't considered (aren't 
parts of the image) unless I'm considering stopping the bishop on one of 
those squares (at which point anything that could threaten it becomes 
relevant) ... but it isn't a part of the original image.  The bishop's 
move just extends until blocked.





AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-26 Thread Dr. Matthias Heger
 

Learning is gaining knowledge. This ability does not imply the ability to
*use* the knowledge.

You can easily learn the mathematical axioms of the numbers. Within these
axioms there is everything there is to know about the numbers.

 

But a lot of people who had this knowledge could not prove Fermat's last
theorem:

 

http://en.wikipedia.org/wiki/Fermat's_Last_Theorem

 

Learning knowledge is very different from using knowledge. Obviously the
latter can be much more difficult than the former.

 

-  Matthias

 

 

What does science include that learning does not?  Please be specific or you
*should* be ignored.

 






AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger

No Mike. AGI must be able to discover regularities of all kinds in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.

Chess is broad and narrow at the same time.
It is easily programmable and testable, and humans solve problems in this
domain using abilities which are essential for AGI. Thus chess is a good
milestone.

Of course it is not sufficient for AGI. But before you think about
sufficient features, necessary abilities are good milestones to verify
that your roadmap towards AGI will not run into a dead end after a long
stretch of vague hope that future embodied experience will solve the
problems you cannot solve today.

- Matthias



Mike wrote
P.S. Matthias seems to be cheerfully cutting his own throat here. The idea 
of a single domain AGI  or pre-AGI is a contradiction in terms every which 
way - not just in terms of domains/subjects or fields, but also sensory 
domains.






AW: [agi] constructivist issues

2008-10-24 Thread Dr. Matthias Heger
The limitations of Gödelian completeness/incompleteness are a subset of the
much stronger limitations of finite automata.

If you want to build a spaceship to go to Mars, it is of no practical
relevance to think about whether it is theoretically possible to move through
wormholes in the universe.

I think, this comparison is adequate to evaluate the role of Gödel's theorem
for AGI.

- Matthias




Abram Demski [mailto:[EMAIL PROTECTED] wrote


I agree with your point in this context, but I think you also mean to
imply that Godel's incompleteness theorem isn't of any importance for
artificial intelligence, which (probably pretty obviously) I wouldn't
agree with. Godel's incompleteness theorem tells us important
limitations of the logical approach to AI (and, indeed, any approach
that can be implemented on normal computers). It *has* however been
overused and abused throughout the years... which is one reason I
jumped on Mark...

--Abram






AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger
This does not imply that people usually do not use visual patterns to play
chess.
It only implies that visual patterns are not necessary.

Since I do not know any good blind chess player, I would suspect that visual
patterns are better suited to chess than the patterns used by blind players.

http://www.psych.utoronto.ca/users/reingold/publications/Reingold_Charness_Pomplun__Stampe_press/


http://www.psychology.gatech.edu/create/pubs/reingoldcharness_perception-in-chess_2005_underwood.pdf


From: Trent Waddington [mailto:[EMAIL PROTECTED] wrote

http://www.eyeway.org/inform/sp-chess.htm

Trent






AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger
I agree. So let me use a better definition:

 

If you can find a single finite domain where your AGI fails, it is no AGI.

The rules of chess imply that it is finite.

 

But of course even this is a rather theoretical definition. What good is an
AGI if it needs billions of years and billions of resources to solve a
problem? None.

 

Thus a comparison with human abilities is probably the best guide to what a
first AGI should be able to do.

 

Children can learn chess. So chess is not a bad milestone.

 

- Matthias

 

From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Friday, 24 October 2008 11:03
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 

 

On Fri, Oct 24, 2008 at 4:09 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:


No Mike. AGI must be able to discover regularities of all kind in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.


According to this definition **no finite computational system can be an
AGI**,
so this definition is obviously overly strong for any practical purposes

E.g. according to this, AIXI (with infinite computational power) but not
AIXItl
would have general intelligence, because the latter can only find
regularities
expressible using programs of length bounded by l and runtime bounded
by t

Unfortunately, the pragmatic notion of AGI we need to use as researchers is
not as simple as the above ... but fortunately, it's more achievable ;-)

One could view the pragmatic task of AGI as being able to discover all
regularities
expressible as programs with length bounded by l and runtime bounded by t
...
[and one can add a restriction about the resources used to make this
discovery], but the thing is, this depends highly on the underlying
computational model,
which then can be used to encode some significant domain bias.

-- Ben G
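As a concrete illustration of this bounded notion (a minimal sketch, not from the thread; the toy operation set, the helper names and the example data are all assumptions), one can enumerate every program up to length l over a tiny instruction set, run each for at most t steps, and keep those that reproduce the given input/output pairs:

from itertools import product

# Toy instruction set: each "program" is a sequence of these operations.
OPS = {"+1": lambda x: x + 1, "*2": lambda x: x * 2, "-1": lambda x: x - 1}

def run(program, x, t):
    """Apply the program's operations to x, aborting after t steps."""
    for step, op in enumerate(program):
        if step >= t:
            return None  # runtime bound t exceeded
        x = OPS[op](x)
    return x

def bounded_regularities(pairs, l, t):
    """Return every program of length <= l (runtime <= t) fitting all (x, y) pairs."""
    found = []
    for length in range(1, l + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x, t) == y for x, y in pairs):
                found.append(program)
    return found

# Data generated by "double, then add one"; the bounded search rediscovers it.
print(bounded_regularities([(1, 3), (2, 5), (5, 11)], l=3, t=10))

The point of the sketch is only that both bounds, and the choice of instruction set, shape which regularities are discoverable at all.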
 

 



AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger
I do not reply to the details of your posting because I think

a)  You mystify AGI

b)  You evaluate the ability to discover regularities completely wrongly

c)  The details may be interesting but are not relevant to the subject
of this thread

 

Just imagine you have built an AGI with all the features you wanted it to have.

If you confront this AGI with chess, it should be able to learn it and to
play well, shouldn't it?

 

-  Matthias

 

 

From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Friday, 24 October 2008 13:01
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 

Matthias: AGI must be able to discover regularities of all kind in all
domains.
If you can find a single domain where your AGI fails, it is no AGI.

 

General Intelligence is the ability to cross over from one domain into
*another* - to a) independently learn new, additional domains and b) to
make *new* connections between domains (as in analogy, metaphor, creativity,
etc.). (Another way of describing this is general adaptivity.)

 

General Intelligence is about discovering IRREGULAR connections. It has
absolutely nothing whatsoever to do with regularities or single
domains. There is nothing regular about any analogy or metaphor or creative
discovery or example of resourcefulness - for example, seeing an atom as a
solar system, or a heart as a pump, or a benzene molecule as a snake, or
indeed seeing a chess-piece as a castle or a king/queen, or a chessboard
as a battlefield.  There is no regular/logical/mathematical connection
involved whatsoever - such connections are the ANTITHESIS of creativity and
general intelligence, which demand *rule-breaking* not  rule-following.

 

Creating an AGI is a *creative* challenge - in talking about chess, you seem
hellbent on avoiding that challenge - and avoiding discussing it. I think
we need a term for such behaviour which characterises the vast majority of
people in AGI - they are creatively challenged.

 

P.S. Perhaps it would be helpful if we compiled a booklist on the subject of
*fear of creativity*. The first one I think of offhand is Conceptual
Blockbusting. You might also look at De Bono and his use of the word Po to
overcome such fears. Anyone think of any more?

 

 

 

 

 

 

 

 



AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-24 Thread Dr. Matthias Heger
Mark Waser wrote

Must it be able to *discover* regularities or must it be able to be taught 
and subsequently effectively use regularities?  I would argue the latter. 
(Can we get a show of hands of those who believe the former?  I think that 
it's a small minority but . . . )


If AGI means the ability to solve a general set of problems in a general set
of domains and if AGI shall compete with human intelligence then
*definitely* AGI must be able to *discover* regularities alone without a
teacher. 


Failure is an interesting evaluation.  Ben's made it quite clear that 
advanced science is a domain that stupid (if not non-exceptional) humans 
fail at.  Does that mean that most humans aren't general intelligences?


Intelligence is not a binary value. And general intelligence is not binary
either.
I am sure that most people have limitations such that they can never become
professional scientists. It is similar to running. Nearly everyone can run,
but only a few people have a chance to run a marathon in under 2:30h.
If you define the ability to be a scientist as a necessary condition for
AGI, then indeed, I would say most people aren't general intelligences.


Chess is a good milestone because of its very difficulty.  The reason why 
humans learn chess so easily (and that is a relative term) is because they 
already have an excellent spatial domain model in place, a ton of strategy 
knowledge available from other learned domains, and the immense array of 
mental tools that we're going to need to bootstrap an AI.  Chess as a GI 
task (or, via a GI approach) is emphatically NOT easily programmable.


Chess as an *environment* to test AGI is easily programmable.
I suggest that strategy knowledge from other domains is of no relevance.
It is mainly visual knowledge. If you gave a human a binary string
representation of the chessboard and the positions of the chess pieces, the
human would have huge problems recognizing which moves are possible. The
quality of his chess obviously depends on the visual representation of the
chess situation.
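To make the contrast concrete, here is a minimal sketch (my own illustration, assuming the third-party python-chess library; the 4-bit-per-square encoding is an arbitrary choice) of the same starting position rendered once as a visual 8x8 diagram and once as a flat binary string:

import chess

board = chess.Board()
print(board)  # 8x8 ASCII diagram: spatial patterns are immediately visible

# One possible flat encoding: 64 squares, 4 bits per square (0 = empty,
# 1-6 = white pawn..king, 7-12 = black pawn..king), concatenated into a string.
PIECE_CODE = {None: 0}
for piece_type in chess.PIECE_TYPES:
    PIECE_CODE[(piece_type, chess.WHITE)] = piece_type
    PIECE_CODE[(piece_type, chess.BLACK)] = piece_type + 6

bits = ""
for square in chess.SQUARES:
    piece = board.piece_at(square)
    key = None if piece is None else (piece.piece_type, piece.color)
    bits += format(PIECE_CODE[key], "04b")

print(bits)  # 256 bits carrying the same information, with no visible patterns

Both outputs describe exactly the same position; only the second forces the reader to reconstruct the spatial structure before any move can be recognized.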

- Matthias





AW: [agi] constructivist issues

2008-10-24 Thread Dr. Matthias Heger
Mark Waser wrote:

Can we get a listing of what you believe these limitations are and whether 
or not you believe that they apply to humans?

I believe that humans are constrained by *all* the limits of finite automata

yet are general intelligences so I'm not sure of your point.


It is also my opinion that humans are constrained by *all* the limits of
finite automata.
But I do not agree that most humans can be scientists. If this is necessary
for general intelligence then most humans are not general intelligences.

It depends on your definition of general intelligence.

Surely there are rules (=algorithms) to be a scientist. If not, AGI would
not be possible and there would not be any scientist at all. 

But you cannot separate the rules (algorithm) from the evaluation of whether a
human or a machine is intelligent. Intelligence comes essentially from these
rules and from a lot of data.

The mere ability to use arbitrary rules does not imply general intelligence.
Your computer has this ability but without the rules it is not intelligent
at all.

- Matthias









AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Dr. Matthias Heger
The goal of chess is well defined: avoid being checkmated and try to
checkmate your opponent.

What checkmate means can be specified formally.
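For what it is worth, the formal condition (the side to move is in check and has no legal move) is a one-liner in any move-generation library; a minimal sketch, assuming the third-party python-chess package:

import chess

# Fool's mate, the shortest possible checkmate, as a worked example.
board = chess.Board()
for san in ["f3", "e5", "g4", "Qh4"]:
    board.push_san(san)

print(board.is_check())        # True: the white king is attacked
print(any(board.legal_moves))  # False: no legal move removes the attack
print(board.is_checkmate())    # True: in check and no legal moves = checkmate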

Humans mainly learn chess from playing chess. Obviously their knowledge
about other domains is not sufficient for most beginners to be good chess
players at once. This can be proven empirically.

Thus an AGI would not learn chess in a way completely different from everything
we know. It would learn from experience, which is one of the most common kinds
of learning.

I am sure that everyone who learns chess by playing against chess computers
and is able to learn to play good chess (which is not certain, as not
everyone can learn to be a good mathematician either) will be able to play
good chess against humans.

My first posting in this thread shows the very weak point in the
argumentation of those people who say that social and other experiences are
needed to play chess.

You suppose knowledge must be available from another domain to solve
problems in the domain of chess.
But everything about chess is on the chessboard itself. If you are not able to
solve chess problems from chess alone, then you are not able to solve certain
solvable problems. And thus you cannot call your AI an AGI.

If you give an AGI all the facts which are sufficient to solve a problem, then
your AGI must be able to solve the problem using nothing other than these
facts.

If you do not agree with this, then how should an AGI know which experiences
in which other domains are necessary to solve the problem? 

The magic you use is the overestimation of real-world experiences. It sounds
as if the ability to solve arbitrary problems in arbitrary domains depends
essentially on your AGI playing in virtual gardens and speaking often with
other people. But this is complete nonsense. No one can learn to play good
chess from those experiences alone. Thus such experiences are not sufficient.
On the other hand, there are programs which definitely do not have such
experiences and outperform humans in chess. Thus those experiences are
neither sufficient nor necessary to play good chess, and emphasizing such
experiences mystifies AGI, just as the doubters of AGI do when they always
argue with Gödel or quantum physics, which in fact have no relevance for
practical AGI at all.

- Matthias





Trent Waddington [mailto:[EMAIL PROTECTED] wrote

Sent: Thursday, 23 October 2008 07:42
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

On Thu, Oct 23, 2008 at 3:19 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 I do not think that it is essential for the quality of my chess who had
 taught me to play chess.
 I could have learned the rules from a book alone.
 Of course these rules are written in a language. But this is not important
 for the quality of my chess.

 If a system is in state x then it is not essential for the future how x
was
 generated.
 Thus a programmer can hardcode the rules of chess in his AGI and then,
 concerning chess the AGI would be in the same state as if someone teaches
 the AGI the chess rules via language.

 The social aspect of learning chess is of no relevance.

Sigh.

Ok, let's say I grant you the stipulation that you can hard code the
rules of chess somehow.  My next question is, in a goal-based AGI
system, what goal are you going to set and how are you going to set
it?  You've ruled out language, so you're going to have to hard code
the goal too, so excuse my use of language:

Play good chess

Oh.. that sounds implementable.  Maybe you'll give it a copy of
GNUChess and let it go at it.. but I've known *humans* who learnt to
play chess that way and they get trounced by the first human they play
against.  How are you going to go about making an AGI that can learn
chess in a completely different way from all the known ways of learning
chess?

Or is the AGI supposed to figure that out?

I don't understand why so many of the people on this list seem to
think AGI = magic.

Trent




AW: AW: [agi] Understanding and Problem Solving

2008-10-23 Thread Dr. Matthias Heger
Natural language understanding is a problem. And a system with the ability
to understand natural language is obviously able to solve *this* problem.

But the ability to talk about certain domains does not imply the ability to
solve the problems in those domains.

I have argued this point with my example of the two programs for the domain
of graphs.

 

As Ben has said, it essentially depends on definitions. Probably, you have a
different understanding of the meaning of understanding ;-)

But for me there is a difference between understanding a domain and the
ability to solve problems in a domain.

 

I can understand a car, but this does not imply that I can drive a car.

I can understand a proof but this does not imply that I can create it.

My computer understands my programs because it executes every step correctly
but it cannot create a single statement in the language it understands.

 

Have you never experienced a situation where you could not solve a problem,
but when another person showed you the solution you understood it at
once?

You could not create it, but you did not need any learning to understand it.

Of course, often when you see a solution for a problem then you learn to
solve it at the same time. But this is exactly the reason why you have the
illusion that understanding and problem solving are the same.

 

Think about a very difficult proof. You can understand every step. But when
you get just an empty piece of paper to write it down again, you cannot
remember the whole proof and thus you cannot create it. But you can
understand it if you read it. Obviously there is a difference between
understanding and problem solving.


 

I am sure you want to define understanding differently. But I do not
agree, because then the term understanding would be overloaded and too
mystified.

And we already have too many terms which are unnecessarily mystified in AI.

 

- Matthias

 

 

Terren Suydam [mailto:[EMAIL PROTECTED] wrote





Matthias, 

I say understanding natural language requires the ability to solve problems.
Do you disagree?  If so, then you must have an explanation for how an AI
that could understand language would be able to understand novel metaphors
or analogies without doing any active problem-solving. What is your
explanation for that?

If on the other hand you agree that NLU entails problem-solving, then that
is a start. From there we can argue whether the problem-solving abilities
necessary for NLU are sufficient to allow problem-solving to occur in any
domain (as I have argued). 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

From: Dr. Matthias Heger [EMAIL PROTECTED]
Subject: AW: [agi] Understanding and Problem Solving
To: agi@v2.listbox.com
Date: Thursday, October 23, 2008, 10:12 AM

I do not agree. Understanding a domain does not imply the ability to solve
problems in that domain.

And the ability to solve problems in a domain does not even imply having a
generally deeper understanding of that domain.

 

Once again, my example of the problem of finding a path within a graph from node
A to node B:

Program p1 (= problem solver) can find a path.

Program p2  (= expert in understanding) can verify and analyze paths.

 

For instance, p2 could be able to compare the length of the path over the first
half of the nodes with the length over the second half of the
nodes. It is not necessary that p1 can do this as well.

 

p2 cannot necessarily find a path. But p1 cannot necessarily analyze its
solution.
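A minimal sketch of the two programs (my own illustration; the example graph, the function names and the half-versus-half comparison are assumptions) could look like this:

from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}

def p1_find_path(graph, start, goal):
    """Problem solver: breadth-first search for any path from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def p2_analyze_path(graph, path):
    """'Understander': verify a given path and compare its two halves."""
    valid = all(b in graph[a] for a, b in zip(path, path[1:]))
    half = len(path) // 2
    return {"valid": valid,
            "edges_in_first_half": max(half - 1, 0),
            "edges_in_second_half": len(path) - half - 1}

path = p1_find_path(GRAPH, "A", "E")        # p1 finds but does not analyze
print(path, p2_analyze_path(GRAPH, path))   # p2 analyzes but does not find

Neither function needs the other's capability, which is exactly the separation being claimed.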

 

Understanding and problem solving are different things which might have a
common subset, but it is wrong that the one implies the other or vice
versa.

 

And that's the main reason why natural language understanding is not
necessarily AGI-complete.

 

-Matthias

 

 

Terren Suydam [mailto:[EMAIL PROTECTED]  wrote:

 



Once again, there is a depth to understanding - it's not simply a binary
proposition.

Don't you agree that a grandmaster understands chess better than you do,
even if his moves are understandable to you in hindsight?

If I'm not good at math, I might not be able to solve y=3x+4 for x, but I
might understand that y equals 3 times x plus four. My understanding is
superficial compared to someone who can solve for x. 

Finally, don't you agree that understanding natural language requires
solving problems? If not, how would you account for an AI's ability to
understand novel metaphor? 

Terren

--- On Thu, 10/23/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

From: Dr. Matthias Heger [EMAIL PROTECTED]
Subject: [agi] Understanding and Problem Solving
To: agi@v2.listbox.com
Date: Thursday, October 23, 2008, 1:47 AM

Terren Suydam wrote:

  

Understanding goes far beyond mere knowledge - understanding *is* the
ability to solve problems. One's understanding of a situation or problem is
only as deep as one's (theoretical) ability to act in such a way as to
achieve a desired outcome. 

  

 

I disagree

AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Dr. Matthias Heger
Right now there is a world championship in chess. My chess programs (e.g.
Fritz 11) can give a ranking of all moves for an arbitrary chess
position.

The program agrees with the grandmasters on which moves are in the top 5. In
most situations it even agrees on which move is the best one.

Thus, the human-style chess of top grandmasters and computer chess are quite
similar today.
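For readers who want to reproduce that kind of ranking, a minimal sketch follows; it assumes the third-party python-chess package and a locally installed UCI engine such as Stockfish (stand-ins for a commercial engine like Fritz 11), and the depth and MultiPV values are arbitrary:

import chess
import chess.engine

board = chess.Board()  # or any position via chess.Board(fen_string)

# Ask the engine for its top 5 candidate moves (MultiPV) at a fixed depth.
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    infos = engine.analyse(board, chess.engine.Limit(depth=15), multipv=5)
    for rank, info in enumerate(infos, start=1):
        move = info["pv"][0]  # first move of this candidate line
        print(rank, board.san(move), info["score"].white())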

 

- Matthias

 

 

 

From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Friday, 24 October 2008 00:41
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 

 

On Thu, Oct 23, 2008 at 5:38 PM, Trent Waddington
[EMAIL PROTECTED] wrote:

On Thu, Oct 23, 2008 at 6:11 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 I am sure that everyone who learns chess by playing against chess
computers
 and is able to learn good chess playing (which is not sure as also not

 everyone can learn to be a good mathematician) will be able to be a good
 chess player against humans.

And you're wrong.


Trent



Yes ... at the moment the styles of human and computer chess players are
different enough that doing well against computer players does not imply
doing nearly equally well against human players ... though it certainly
helps a lot ...

ben g 

 



AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-23 Thread Dr. Matthias Heger
I am very impressed by the performance of humans in chess compared to
computer chess.

The computer steps through millions(!) of positions per second. And even if
the best chess players say they evaluate at most 3 positions per second, I
am sure that this cannot be true, because there are so many traps in chess
which must be considered.

 

I think humans represent chess by a huge number of *visual* patterns. The
chessboard is 8x8 squares. Probably, a human considers all 2x2, 3x3, 4x4 and
even larger subsets of the chessboard at once, besides the possible moves. We
see if a pawn is alone or if a knight is at the edge of the board. We see if
the pawns are on a diagonal and much more. I would guess that the human
brain observes many thousands of visual patterns in a single position.
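A quick back-of-the-envelope count (illustrative arithmetic only) of how many square sub-boards of this kind an 8x8 board even contains:

# An 8x8 board has (9 - k)**2 distinct k-by-k windows for each window size k.
windows = {k: (9 - k) ** 2 for k in range(2, 9)}
print(windows)                # {2: 49, 3: 36, 4: 25, 5: 16, 6: 9, 7: 4, 8: 1}
print(sum(windows.values()))  # 140 square windows of size 2x2 or larger

Each window can hold a large number of piece configurations, so a figure of thousands of candidate patterns per position is at least plausible.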

This is the only explanation for me why the best chess players still have a
small chance of winning against computers.

 

Even a beginner who has never played chess would see some patterns in the
initial position. All pieces of the same color are grouped together on
opposite sides. All pawns of the same color are in the same row, and so on.
The interesting question is why the beginner can already see regularities. I
think humans have a lot of visual bias which is also useful for seeing
patterns in chess. On the other hand, visual embodied experience is of course
important too. In my opinion, sophisticated vision is much more important
for an artificial human than natural language understanding.

 

-Matthias

 

From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Friday, 24 October 2008 01:53
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 


Yeah, but these programs did not learn to play via playing other computer
players or studying the rules of the game ... they use alpha-beta pruning
combined with heuristic evaluation functions carefully crafted by human
chess experts ... i.e. they are created based on human knowledge about
playing human players...

I do think that a sufficiently clever AGI should be able to learn to play
chess very well based on just studying the rules.  However, it's notable
that **either no, or almost no, humans have ever done this** ... so it would
require a quite high level of intelligence in this domain...

ben g






AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I see no argument in your text against my main claim, that an AGI
should be able to learn chess from playing chess alone. This is what I call a
straw-man reply.

 

My main point against embodiment is just the huge effort embodiment requires.
You could work for years with this approach and a certain AGI concept before
you recognize that it doesn't work.

 

If you apply your AGI concept in a small and not necessarily
AGI-complete domain, you would arrive much faster at a benchmark of whether
your concept is even worth the difficult studies with embodiment.

 

Chess is a very good domain for this benchmark because it is very easy to
program and it is very difficult to outperform human intelligence in this
domain.

 

- Matthias

 

 

 

 

From: David Hart [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 22 October 2008 09:43
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 

Matthias, 

You've presented a straw man argument to criticize embodiment; As a
counter-example, in the OCP AGI-development plan, embodiment is not
primarily used to provide domains (via artificial environments) in which an
AGI might work out abstract problems, directly or comparatively (not to
discount the potential utility of this approach in many scenarios), but
rather to provide an environment for the grounding of symbols (which include
concepts important for doing mathematics), similar to the way in which
humans (from infants through to adults) learn through play and also through
guided education.

'Abstraction' is so named because it involves generalizing from the
specifics of one or more domains (d1, d2), and is useful when it can be
applied (with *any* degree of success) to other domains (d3, ...). Virtual
embodied interactive learning utilizes virtual objects and their properties
as a way of generating these specifics for artificial minds to use to build
abstractions, to grok the abstractions of others, and ultimately to build a
deep understanding of our reality (yes, 'deep' in this sense is used in a
very human-mind-centric way).

Of course, few people claim that machine learning with the help of virtually
embodied environments is the ONLY way to approach building an AI capable of
doing mathematics (and communicating with humans about mathematics), but
it is an approach that has *many* good things going for it, including
proving tractable via measurable incremental improvements (even though it is
admittedly still at a *very* early stage).

-dave

On Wed, Oct 22, 2008 at 4:20 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

It seems to me that many people think that embodiment is very important for
AGI.

For instance some people seem to believe that you can't be a good
mathematician if you haven't had some embodied experience.

 

But this would have a rather strange consequence:

If you give your AGI a difficult mathematical problem to solve, then it
would answer:

 

Sorry, I still cannot solve your problem, but let me walk with my body
through the virtual world. 

Hopefully, I will then understand your mathematical question and, even more
hopefully, I will be able to solve it after some further embodied
experience.

 

AGI is the ability to solve different problems in different domains. But
such an AGI would need to gain experience in domain d1 in order to solve
problems of domain d2. Does this really make sense if all the information
necessary to solve problems of d2 is in d2? I think an AGI which has to gain
experience in d1 in order to solve a problem of domain d2 which contains
everything needed to solve this problem is no AGI. How should such an AGI
know what experiences in d1 are necessary to solve the problem of d2?

 

In my opinion a real AGI must be able to solve a problem of a domain d
without leaving this domain if everything needed to solve the problem is in
this domain.

 

From this we can define a simple benchmark which is not sufficient for AGI
but which is *necessary* for a system to be an AGI system:

 

Within the domain of chess there is everything to know about chess. So when it
comes to becoming a good chess player,

learning chess from playing chess must be sufficient. Thus, an AGI which is
not able to enhance its abilities in chess from playing chess alone is no
AGI.

 

Therefore, my first steps in the roadmap towards AGI would be the following:

1.   Design the architecture of your AGI.

2.   Implement the software for your AGI.

3.   Test whether your AGI is able to become a good chess player from learning
in the domain of chess alone.

4.   If your AGI can't even learn to play good chess, then it is no AGI
and it would be a waste of time to experiment with your system in more
complex domains.

 

-Matthias

 

 

 

 


AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
The restriction is by no means arbitrary. If your AGI is in a spaceship or
on a distant planet and has to solve problems in that domain, then it has
no chance to leave that domain.

 

If this domain contains all the information necessary to solve the
problem, then an AGI *must* be able to solve this problem without leaving
this domain. Otherwise it would have an essential lack of intelligence and
it would not be a real AGI.

 

By the way:

Generalization is a mythical thing, because you can never draw conclusions
from previously visited state-action pairs about still-unvisited state-action
pairs. The reason this often works is just the regularities in the environment.
But of course you cannot presume that these regularities hold for arbitrary
domains. The only thing you can do is to use your past experiences and
*hope* they will apply in still unknown domains.

 

- Matthias

 

 

From: David Hart [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 22 October 2008 11:27
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 

I see no reason to impose on AGI the arbitrary restriction that it need
posses the ability to learn to perform in a given domain by learning from
only within that domain. An AGI should be able to, by definition, adapt
itself to function across different and varied domains, using its
multi-domain knowledge and experience to improve its performance in any
single domain. Choosing a performance metric from only a single domain as a
benchmark for an AGI is antithetical to this definition, because, e.g.,
software that can perform well at chess without being adaptable to other
domains is not AGI, but merely narrow AI, and such simplistic single-domain
benchmarks can be easily tricked by collections of well orchestrated narrow
AI programs. Rather, good benchmarks should be composite benchmarks with
component sub-benchmarks spanning multiple and varied domains.

A human analogue of the multi-domain AGI concept is nicely paraphrased by
Robert A. Heinlein: A human being should be able to change a diaper, plan
an invasion, butcher a hog, conn a ship, design a building, write a sonnet,
balance accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new problem,
pitch manure, program a computer, cook a tasty meal, fight efficiently, die
gallantly. Specialization is for insects. 

-dave



On Wed, Oct 22, 2008 at 7:23 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

I see no argument in your text against my main argumentation, that an AGI
should be able to learn chess from playing chess alone. This I call straw
man replies.

 

My main point against embodiment is just the huge effort for embodiment. You
could work for years with this approach and  a certain AGI concept until you
recognize that it doesn't work.

 

If you apply your AGI concept in a small and even not necessarily
AGI-complete domain you would come much faster to a benchmark whether your
concept is even worth to make difficult studies with embodiment.

 

Chess is a very good domain for this benchmark because it is very easy to
program and it is very difficult to outperform human intelligence in this
domain.

 

- Matthias

 

 

 

 

From: David Hart [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 22 October 2008 09:43
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 

Matthias, 

You've presented a straw man argument to criticize embodiment; As a
counter-example, in the OCP AGI-development plan, embodiment is not
primarily used to provide domains (via artificial environments) in which an
AGI might work out abstract problems, directly or comparatively (not to
discount the potential utility of this approach in many scenarios), but
rather to provide an environment for the grounding of symbols (which include
concepts important for doing mathematics), similar to the way in which
humans (from infants through to adults) learn through play and also through
guided education.

'Abstraction' is so named because it involves generalizing from the
specifics of one or more domains (d1, d2), and is useful when it can be
applied (with *any* degree of success) to other domains (d3, ...). Virtual
embodied interactive learning utilizes virtual objects and their properties
as a way of generating these specifics for artificial minds to use to build
abstractions, to grok the abstractions of others, and ultimately to build a
deep understanding of our reality (yes, 'deep' in this sense is used in a
very human-mind-centric way).

Of course, few people claim that machine learning with the help of virtually
embodied environments is the ONLY way to approach building an AI capable of
doing and mathematics (and communicating with humans about mathematics), but
it is an approach that has *many* good things going for it, including
proving tractable via measurable incremental improvements (even though

AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I do not claim that an AGI might not have biases equivalent to the genes in
your example. The point is that AGI is the union of all AI sets. If I
have a certain domain d and a problem p, and I know that p can be solved
using nothing other than d, then an AGI must be able to solve problem p in d;
otherwise it is not an AGI.

- Matthias

Bob Mottram wrote


In the case of humans embodied experience also includes the
experience accumulated by our genes over many generations of
evolutionary time.  This means that even if you personally have not
had much embodied experience during your lifetime evolution has shaped
your brain wiring ready for that sort of cognition to take place (for
instance the ability to perform mental rotations).






AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
If you give the system the rules of chess, then it has everything it needs
to know to become a good chess player.
It may play against itself, or against a common chess program, or against
humans.
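A minimal self-play loop sketch (my own illustration, assuming the third-party python-chess library, with uniformly random move selection standing in for whatever learning policy is actually plugged in):

import random
import chess

def self_play_game(choose_move):
    """Play one game in which the same policy picks moves for both sides."""
    board = chess.Board()
    while not board.is_game_over():
        board.push(choose_move(board))
    return board.result()  # "1-0", "0-1" or "1/2-1/2"

random_policy = lambda board: random.choice(list(board.legal_moves))
print(self_play_game(random_policy))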


- Matthias


Trent Waddington [mailto:[EMAIL PROTECTED] wrote


No-one can learn chess from playing chess alone.

Chess is necessarily a social activity.

As such, your suggestion isn't even sensible, let alone reasonable.

Trent




AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I do not regard chess as anything as important as a drosophila for AI. It would
just be a first milestone where we can get a fast proof of concept for an AGI
approach. The faster we can sort out bad AGI approaches, the sooner we will
obtain a successful one.

 

Chess has the advantage of being an easily programmable domain.

The domain of chess is not AGI-complete, but crucial problems for AGI can be
found in chess as well.

An AGI can be trained automatically against strong chess programs because those
engines offer an open API.

The performance can be evaluated by Elo rating, i.e. the common evaluation
method for chess players.
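For reference, the standard Elo update such an evaluation would rest on (the K-factor and the example ratings below are assumptions):

def expected_score(r_a, r_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_rating(r_a, r_b, score_a, k=32):
    """New rating for A after one game (score_a: 1 win, 0.5 draw, 0 loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# A 1500-rated learner that beats a 2000-rated engine gains nearly the full K.
print(round(update_rating(1500, 2000, 1.0), 1))  # ~1530.3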

 

But I do not emphasize performance evaluation too much. The milestone would
be passed successfully if the AGI, using a current PC, were able
to beat average human chess players after it has played many thousands of
games against chess programs.

 

It would be a big step towards AGI if someone could build a chess-playing
program out of learning software which is pattern-based and is not inherently
built for chess.

 

I think such a program would gain much attention in the AI community,
which is also necessary to accelerate AGI research.

Of course successful experiments with embodiment would probably gain more
attention. But the development cycle from concept to experiment would take
much longer with embodiment than with an easy-to-program and
automatically testable chess domain.

 

We should expect that we will still have to go through this cycle many times,
and therefore it is essential that the cycle require as little effort and time
as possible.

 

- Matthias

 

 

Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote



Current AIs learn chess without engaging in social activities ;-).

And chess might be a good drosophila for AI, if it's treated as such (

http://www-formal.stanford.edu/jmc/chess.html ).

This was uncalled for.

 






AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
I agree that chess is far from sufficient for AGI. But I have mentioned this
already at the beginning of this thread.

The important role of chess for AGI could be to rule out bad AGI approaches
as fast as possible.

 

Before you go to more complex domains, you should consider chess as a first
important milestone which helps you avoid going a long way towards a dead end
with the wrong approach to AGI.

 

If chess is so easy because it is completely described, with complete
information about the state available and fully deterministic, then it is all
the more important that your AGI can learn such an easy task before you try
something more difficult.

 

 

-Matthias

 

 

 Derek Zahn [mailto:[EMAIL PROTECTED] wrote



I would agree with this and also with your thesis that a true AGI must be
able to learn chess in this way.  However, although this ability is
necessary it is far from sufficient for AGI, and thinking about AGI from
this very narrow perspective seems to me to be a poor way to attack the
problem.  Very few of the things an AGI must be able to do (as the Heinlein
quote points out) are similar to chess -- completely described, complete
information about state available, fully deterministic.  If you aim at chess
you might hit chess but there's no reason that you will achieve anything
higher.
 
Still, using chess as a test case may not be useless; a system that produces
a convincing story about concept formation in the chess domain (that is,
that invents concepts for pinning, pawn chains, speculative sacrifices in
exchange for piece mobility, zugzwang, and so on without an identifiable
bias toward these things) would at least be interesting to those interested
in AGI.
 
Mathematics, though, is interesting in other ways.  I don't believe that
much of mathematics involves the logical transformations performed in proof
steps.  A system that invents new fields of mathematics, new terms, new
mathematical ideas -- that is truly interesting.  Inference control is
boring, but inventing mathematical induction, complex numbers, or ring
theory -- THAT is AGI-worthy.
 



AW: [agi] A huge amount of math now in standard first-order predicate logic format!

2008-10-22 Thread Dr. Matthias Heger
Very useful link. Thanks.

 

-Matthias

 

From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 22 October 2008 15:40
To: agi@v2.listbox.com
Subject: [agi] A huge amount of math now in standard first-order predicate
logic format!

 


I had not noticed this before, though it was posted earlier this year.

Finally Josef Urban translated Mizar into a standard first-order logic
format:

http://www.cs.miami.edu/~tptp/MizarTPTP/

Note that there are hyperlinks pointing to the TPTP-ized proofs of each
theorem.

This is math with **no steps left out of the proofs** ... everything
included ...

This should be a great resource for AI systems that want to learn about math
by reading definitions/theorems/proofs without needing to grok English
language or diagrams...

Translating this TPTP format into something easily loadable into OpenCog,
for
example, would not be a big trick
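As a rough illustration of how simple the surface syntax is (a sketch only, not OpenCog code; the regular expression and the example line are my own assumptions and ignore multi-line and commented entries):

import re

# A TPTP first-order annotation has the shape: fof(name, role, formula).
FOF = re.compile(r"^\s*fof\(\s*([^,]+)\s*,\s*([^,]+)\s*,\s*(.+)\)\.\s*$", re.DOTALL)

def parse_fof(line):
    """Split one fof(name, role, formula). line into its three parts."""
    match = FOF.match(line)
    if match is None:
        raise ValueError("not a fof() line: %r" % line)
    name, role, formula = (part.strip() for part in match.groups())
    return {"name": name, "role": role, "formula": formula}

example = "fof(example_theorem, conjecture, ! [X] : (p(X) => q(X)))."
print(parse_fof(example))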

Doing useful inference on the data, on the other hand, is another story ;-)

To try this in OpenCog, we gotta wait for Joel to finish porting the
backward-chainer
from NM to OpenCog ... and then, dealing with all this data would be a
mighty test
of adaptive inference control ;-O

ben g




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
Ben wrote:

The ability to cope with narrow, closed, deterministic environments in an
isolated way is VERY DIFFERENT from the ability to cope with a more
open-ended, indeterminate environment like the one humans live in

 

These narrow, closed, deterministic domains are *subsets* of what AGI is
intended to do and what humans can do. Chess can be learned by young
children.  



Not everything that is a necessary capability of a completed human-level,
roughly human-like AGI, is a sensible first step toward a human-level,
roughly human-like AGI

 

This is surely true.  But let's say someone wants to develop a car. Doesn't
it make sense to first develop and test its essential parts before putting
everything together and going on the road?

I think chess is a good testing area because in the domain of chess there
are too many situations to consider them all. This is a very typical and
very important problem of human environments as well. On the other hand
there are patterns in chess which can be learned and which make life less
complex. This is the second analogy to human environments. Therefore the
domain of chess is not so different. It contains an important subset of
typical problems for human-level AI.

And if you want to solve the complex problem to build AGI then you cannot
avoid the task of solving every single of its sub problems. 

If your system sees no patterns in chess, then I would doubt whether it is
really suitable for AGI.

 


I'm not saying that making a system that's able to learn chess is a **bad**
idea.   I am saying that I suspect it's not the best path to AGI.

 

Ok.




I'm slightly more attracted to the General Gameplaying (GGP) Competition
than to a narrow focus on chess

 http://games.stanford.edu/

but not so much to that either...

I look at it this way.  I have a basic understanding of how a roughly
human-like AGI mind (with virtual embodiment and language facility) might
progress from the preschool level up through the university level, by
analogy to human cognitive development.

On the other hand, I do not have a very good understanding at all of how a
radically non-human-like AGI mind would progress from the "learn to play chess"
level to the university level, or to the level of GGP, or robust
mathematical theorem-proving, etc.  If you have a good understanding of this
I'd love to hear it.

 

Ok. I do not say that your approach is wrong. In fact I think it is very
interesting and ambitious. But just as you think that my approach is not the
best one, I think that your approach is not the best one. Probably the
discussion could be endless. And probably you have already invested too much
effort in your approach to really consider changing it. I hope you are right,
because I would be very happy to see the first AGI soon, regardless of who
builds it and regardless of which concept is used.

-Matthias










AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-22 Thread Dr. Matthias Heger
You make the implicit assumption that a natural language understanding
system will pass the Turing test. Can you prove this?

Furthermore, it is just an assumption that the ability to have and to apply
such rules is really necessary to pass the Turing test.

For these two reasons, you still haven't shown 3a and 3b.

By the way:
The Turing test requires convincing 30% of the judges.
Today there is already a system which can convince 25%:

http://www.sciencedaily.com/releases/2008/10/081013112148.htm

-Matthias


 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but
 you have not shown that
 3a) that a language understanding system necessarily(!) has
 this rules
 3b) that a language understanding system necessarily(!) can
 apply such rules

It must have the rules and apply them to pass the Turing test.

-- Matt Mahoney, [EMAIL PROTECTED]







AW: AW: [agi] If your AGI can't learn to play chess it is no AGI

2008-10-22 Thread Dr. Matthias Heger
 

 

It depends on what "to play chess poorly" means. No one would expect a
general AGI architecture to outperform specialized chess programs with the same
computational resources. I think you could convince a lot of people if you
demonstrated that your approach, which is obviously completely different from
brute-force chess, can learn chess to the moderate level of, let's say, an
average 10-year-old human chess player.

 

At least when you are in your OpenCog roadmap between the "artificial child"
and "artificial adult" phases, your system should necessarily be able
to learn chess without any special hacking of hidden chess knowledge. 

 

BTW, computer Go is already not so bad:

 

 
http://www.engadget.com/2008/08/15/supercomputer-huygens-beats-go-professional-no-one-is-safe/

 

- Matthias

 

Ben wrote:


I strongly suspect that OpenCog ... once more of the NM tools are ported to
it (e.g. the completion of the backward chainer port) ... could learn to
play chess legally but not very well.   To get it to play really well would
probably require either a lot of specialized hacking with inference control,
or a broader AGI approach going beyond the chess domain... or a lot more
advancement of the learning mechanisms (along lines already specified in the
OCP design)  To me, teaching OpenCog to play chess poorly would prove
almost nothing.  And getting it to play chess well via tailoring the
inference control mechanisms would prove little that's relevant to AGI,
though it would be cool.

 

Ok. I do not say that your approach is wrong. In fact I think it is very
interesting and ambitious. But just as you think that my approach is not the
best one, I think that your approach is not the best one. Probably the
discussion could be endless. And probably you have already invested too much
effort in your approach to really consider changing it. I hope you are right,
because I would be very happy to see the first AGI soon, regardless of who
builds it and regardless of which concept is used.

I would change my approach if I thought there were a better one.  But you
haven't convinced me, just as I haven't convinced you ;-)

Anyway, to take your approach I would not need to change my AGI design at
all: OCP could be pursued in the domain of learning to play chess.  I just
don't think that's the best choice.

BTW, if I were going to pursue a board game I'd choose Go not chess ... at
least it hasn't been solved by narrow-AI very well yet ... so a really good
OpenCog-based Go program would have more sex appeal ... there has not been a
Deep Blue of Go

My son is a good Go player so maybe I'll talk him into trying this one day
;-)

ben g
 

 


 






AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
Sorry, but this was no proof that a natural language understanding system is
necessarily able to solve the equation x*3 = y for arbitrary y.

1) You have not shown that a language understanding system must
necessarily(!) have made statistical experiences on the equation x*3 =y.

2) you give only a few examples. For a proof of the claim, you have to prove
it for every(!) y.

3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but you have not shown
that
3a) that a language understanding system necessarily(!) has this rules
3b) that a language understanding system necessarily(!) can apply such rules

In my opinion a natural language understanding system must have a lot of
linguistic knowledge.
Furthermore a system which can learn natural languages must be able to gain
linguistic knowledge.

But both systems do not have necessarily(!) the ability to *work* with this
knowledge as it is essential for AGI.

And for this reason natural language understanding is not AGI complete at
all.

-Matthias



-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 21 October 2008 05:05
To: agi@v2.listbox.com
Subject: [agi] Language learning (was Re: Defining AGI)


--- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 For instance, I doubt that anyone can prove that
 any system which understands natural language is
 necessarily able to solve
 the simple equation x *3 = y for a given y.

It can be solved with statistics. Take y = 12 and count Google hits:

string count
-- -
1x3=12 760
2x3=12 2030
3x3=12 9190
4x3=12 16200
5x3=12 1540
6x3=12 1010
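
(The counting trick above in code form, only as a sketch: hit_count() is a
hypothetical stand-in for a search-engine or corpus query, no real API is
assumed, and here it simply reads the frequency table quoted above.)

# Sketch of the counting approach: pick the x whose string "x*3=12"
# is most frequent in a corpus.
def solve_by_statistics(y, hit_count, candidates=range(1, 10)):
    scores = {x: hit_count(f"{x}x3={y}") for x in candidates}
    return max(scores, key=scores.get)

# Using the counts quoted above as the "corpus":
counts = {"1x3=12": 760, "2x3=12": 2030, "3x3=12": 9190,
          "4x3=12": 16200, "5x3=12": 1540, "6x3=12": 1010}
print(solve_by_statistics(12, lambda q: counts.get(q, 0)))   # -> 4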

More generally, people learn algebra and higher mathematics by induction, by
generalizing from lots of examples.

5 * 7 = 35 -> 35 / 7 = 5
4 * 6 = 24 -> 24 / 6 = 4
etc...
a * b = c -> c = b / a

It is the same way we learn grammatical rules, for example converting active
to passive voice and applying it to novel sentences:

Bob kissed Alice -> Alice was kissed by Bob.
I ate dinner -> Dinner was eaten by me.
etc...
SUBJ VERB OBJ -> OBJ was VERB by SUBJ.

In a similar manner, we can learn to solve problems using logical deduction:

All frogs are green. Kermit is a frog. Therefore Kermit is green.
All fish live in water. A shark is a fish. Therefore sharks live in water.
etc...

I understand the objection to learning math and logic in a language model
instead of coding the rules directly. It is horribly inefficient. I estimate
that a neural language model with 10^9 connections would need up to 10^18
operations to learn simple arithmetic like 2+2=4 well enough to get it right
90% of the time. But I don't know of a better way to learn how to convert
natural language word problems to a formal language suitable for entering
into a calculator at the level of an average human adult.

-- Matt Mahoney, [EMAIL PROTECTED]





AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
There is another point which indicates that the ability to understand
language or to learn language does not imply *general* intelligence.

You can often observe in school that linguistically talented students are poor
in mathematics and vice versa. 

- Matthias







AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
 

I agree. But the vaguely-human-mind-like-architecture is a huge additional
assumption.

 

If you have a system that can solve problem x and has in addition a
human-mind-like-architecture then obviously you obtain AGI for *any* x. 

 

The whole AGI-completeness would essentially depend on this additional
assumption. 

A human-mind-like architecture would even imply the ability to learn
natural language understanding. 

 

- Matthias

Ben wrote


I wouldn't argue that any software system capable of learning human
language, would necessarily be able to learn mathematics

However, I strongly suspect that any software system **with a vaguely
human-mind-like architecture** that is capable of learning human language,
would also be able to learn basic mathematics

ben

On Tue, Oct 21, 2008 at 2:30 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

Sorry, but this was no proof that a natural language understanding system is
necessarily able to solve the equation x*3 = y for arbitrary y.

1) You have not shown that a language understanding system must
necessarily(!) have made statistical experiences on the equation x*3 =y.

2) you give only a few examples. For a proof of the claim, you have to prove
it for every(!) y.

3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but you have not shown
that
3a) that a language understanding system necessarily(!) has this rules
3b) that a language understanding system necessarily(!) can apply such rules

In my opinion a natural language understanding system must have a lot of
linguistic knowledge.
Furthermore a system which can learn natural languages must be able to gain
linguistic knowledge.

But both systems do not have necessarily(!) the ability to *work* with this
knowledge as it is essential for AGI.

And for this reason natural language understanding is not AGI complete at
all.

-Matthias



-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 21 October 2008 05:05
To: agi@v2.listbox.com
Subject: [agi] Language learning (was Re: Defining AGI)



--- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 For instance, I doubt that anyone can prove that
 any system which understands natural language is
 necessarily able to solve
 the simple equation x *3 = y for a given y.

It can be solved with statistics. Take y = 12 and count Google hits:

string count
-- -
1x3=12 760
2x3=12 2030
3x3=12 9190
4x3=12 16200
5x3=12 1540
6x3=12 1010

More generally, people learn algebra and higher mathematics by induction, by
generalizing from lots of examples.

5 * 7 = 35 -> 35 / 7 = 5
4 * 6 = 24 -> 24 / 6 = 4
etc...
a * b = c -> c = b / a

It is the same way we learn grammatical rules, for example converting active
to passive voice and applying it to novel sentences:

Bob kissed Alice -> Alice was kissed by Bob.
I ate dinner -> Dinner was eaten by me.
etc...
SUBJ VERB OBJ -> OBJ was VERB by SUBJ.

In a similar manner, we can learn to solve problems using logical deduction:

All frogs are green. Kermit is a frog. Therefore Kermit is green.
All fish live in water. A shark is a fish. Therefore sharks live in water.
etc...

I understand the objection to learning math and logic in a language model
instead of coding the rules directly. It is horribly inefficient. I estimate
that a neural language model with 10^9 connections would need up to 10^18
operations to learn simple arithmetic like 2+2=4 well enough to get it right
90% of the time. But I don't know of a better way to learn how to convert
natural language word problems to a formal language suitable for entering
into a calculator at the level of an average human adult.

-- Matt Mahoney, [EMAIL PROTECTED]







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson




 





AW: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Dr. Matthias Heger
Mark Waser wrote


Science is done by comparing hypotheses to data.  Frequently, the fastest 
way to handle a hypothesis is to find a counter-example so that it can be 
discarded (or extended appropriately to handle the new case).  How is asking 
for a counter-example unscientific?


Before you ask for counter-examples you should *first* give some arguments
which support your hypothesis. This was my point. If everyone made wild
hypotheses and asked other scientists to find counter-examples, then we would
end up in an explosion of the number of hypotheses. But if you first show some
arguments which support your hypothesis, then you give the scientific community
reasons why it is worth spending some time thinking about the hypothesis.
Regarding your example with Darwin: Darwin had gathered evidence which
supported his hypothesis *first*.



First, I'd appreciate it if you'd drop the strawman.  You are the only one 
who keeps insisting that anything is easy.


Is this a scientific discussion from you? No. You use rhetoric and nothing
else.
I don't say that anything is easy. 


Second, my hypothesis is more correctly stated that the pre-requisites for a 
natural language understanding system are necessary and sufficient for a 
scientist because both are AGI-complete.  Again, I would appreciate it if 
you could correctly represent it in the future.


This is the first time you speak about pre-requisites. Your whole hypothesis
changes with these pre-requisites. If you were being scientific you would
specify what your pre-requisites are.


So, for simplicity, why don't we just say
scientist = understanding


Understanding does not imply the ability to create something new or to apply
knowledge. 
Furthermore, natural language understanding does not imply understanding of
*general* domains. There is much evidence that the ability to understand
natural language does not imply an understanding of mathematics, not to speak
of the ability to create mathematics.


Now, for a counter-example to my initial hypothesis, why don't you explain 
how you can have natural language understanding without understanding (which 
equals scientist ;-).


Understanding does not equal scientist. 
The claim that natural language understanding needs understanding is
trivial. This wasn't your initial hypothesis.






- Original Message - 
From: Dr. Matthias Heger [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October 20, 2008 5:00 PM
Subject: AW: AW: AW: [agi] Re: Defining AGI


If MW were scientific, then he would not have asked Ben to prove that MW's
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has not given even a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED]
Sent: Monday, 20 October 2008 22:48
To: agi@v2.listbox.com
Subject: Re: AW: AW: [agi] Re: Defining AGI

 You and MW are clearly as philosophically ignorant, as I am in AI.

But MW and I have not agreed on anything.

Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,
and creativity
http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.

And this is fundamentally what I was trying to say.

I don't think of myself as philosophically ignorant. I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!




AW: AW: [agi] Language learning (was Re: Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
Andi wrote

This really seems more like arguing that there is no such thing as
AI-complete at all.  That is certainly a possibility.  It could be that
there are only different competences.  This would also seem to mean that
there isn't really anything that is truly general about intelligence,
which is again possible.

No. This argument shows that there are very basic capabilities which do not
follow necessarily from natural language understanding:

Usage of knowledge.
The example of solving an equation is just one of many examples. 
If you can talk about things, this does not imply that you can do things.



I guess one thing we're seeing here is a basic example of mathematics as
having underlying separate mechanisms from other features of language. 
Lakoff and Nunez talk about subitizing (judging small numbers of
things at a glance) as one core competency, and counting as another. 
These are things you can see in animals that do not use language.  So,
sure, mathematics could be a separate realm of intelligence.


It is not just mathematics. A natural language understanding system can talk
about shopping. But from this ability you can't prove that it can do
shopping.
There are essential features of intelligence missing in natural language
understanding. And that's the reason why natural language understanding is
not AGI-complete.



Of course, my response to that is that this kind of basic mathematical
ability is needed to understand language.  



This argumentation is nothing other than making a non-AGI-complete system
AGI-complete by adding more and more features.

If you suppose, for an arbitrary still unsolved problem P, that everything
which is needed to solve AGI is also necessary to solve P, then it becomes
trivial that P is AGI-complete. 

But this argumentation is similar to that of the AGI doubters, who essentially
suppose, for an arbitrary given still unsolved problem P, that P is not
computable at all.






-Matthias


Matthias wrote:
 Sorry, but this was no proof that a natural language understanding system
 is
 necessarily able to solve the equation x*3 = y for arbitrary y.

 1) You have not shown that a language understanding system must
 necessarily(!) have made statistical experiences on the equation x*3 =y.

 2) you give only a few examples. For a proof of the claim, you have to
 prove
 it for every(!) y.

 3) you apply rules such as 5 * 7 = 35 -> 35 / 7 = 5 but you have not shown
 that
 3a) that a language understanding system necessarily(!) has this rules
 3b) that a language understanding system necessarily(!) can apply such
 rules

 In my opinion a natural language understanding system must have a lot of
 linguistic knowledge.
 Furthermore a system which can learn natural languages must be able to
 gain
 linguistic knowledge.

 But both systems do not have necessarily(!) the ability to *work* with
 this
 knowledge as it is essential for AGI.

 And for this reason natural language understanding is not AGI complete at
 all.

 -Matthias



 -Ursprüngliche Nachricht-
 Von: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Gesendet: Dienstag, 21. Oktober 2008 05:05
 An: agi@v2.listbox.com
 Betreff: [agi] Language learning (was Re: Defining AGI)


 --- On Mon, 10/20/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 For instance, I doubt that anyone can prove that
 any system which understands natural language is
 necessarily able to solve
 the simple equation x *3 = y for a given y.

 It can be solved with statistics. Take y = 12 and count Google hits:

 string count
 -- -
 1x3=12 760
 2x3=12 2030
 3x3=12 9190
 4x3=12 16200
 5x3=12 1540
 6x3=12 1010

 More generally, people learn algebra and higher mathematics by induction,
 by
 generalizing from lots of examples.

 5 * 7 = 35 -> 35 / 7 = 5
 4 * 6 = 24 -> 24 / 6 = 4
 etc...
 a * b = c -> c = b / a

 It is the same way we learn grammatical rules, for example converting
 active
 to passive voice and applying it to novel sentences:

 Bob kissed Alice -> Alice was kissed by Bob.
 I ate dinner -> Dinner was eaten by me.
 etc...
 SUBJ VERB OBJ -> OBJ was VERB by SUBJ.

 In a similar manner, we can learn to solve problems using logical
 deduction:

 All frogs are green. Kermit is a frog. Therefore Kermit is green.
 All fish live in water. A shark is a fish. Therefore sharks live in water.
 etc...

 I understand the objection to learning math and logic in a language model
 instead of coding the rules directly. It is horribly inefficient. I
 estimate
 that a neural language model with 10^9 connections would need up to 10^18
 operations to learn simple arithmetic like 2+2=4 well enough to get it
 right
 90% of the time. But I don't know of a better way to learn how to convert
 natural language word problems to a formal language suitable for entering
 into a calculator at the level of an average human adult.

 -- Matt Mahoney, [EMAIL PROTECTED]




AW: AW: AW: [agi] Re: Defining AGI

2008-10-21 Thread Dr. Matthias Heger
Mark Waser answered to "I don't say that anything is easy.":

Direct quote cut and paste from *your* e-mail . . . .
--
From: Dr. Matthias Heger
To: agi@v2.listbox.com
Sent: Sunday, October 19, 2008 2:19 PM
Subject: AW: AW: [agi] Re: Defining AGI


The process of translating patterns into language should be easier than the 
process of creating patterns or manipulating patterns. Therefore I say that 
language understanding is easy.

--





Clearly you DO say that language understanding is easy.


Your claim was that I have said that *anything* is easy.
This is a false generalization, a well-known rhetorical move.


I think you are often less scientific than those people whom you blame for not
being scientific.
I will give up discussing with you.











AW: [agi] natural language - algebra (was Defining AGI)

2008-10-21 Thread Dr. Matthias Heger
 

 



Here's my simple proof: algebra, or any other formal language for that
matter, is expressible in natural language, if inefficiently.

Words like "quantity", "sum", "multiple", "equals", and so on, are capable of
conveying the same meaning that the sentence "x*3 = y" conveys. The rules
for manipulating equations are likewise expressible in natural language.

Thus it is possible in principle to do algebra without learning the
mathematical symbols. Much more difficult for human minds perhaps, but
possible in principle. Thus, learning mathematical formalism via translation
from natural language concepts is possible (which is how we do it, after
all). Therefore, an intelligence that can learn natural language can learn
to do math.
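
(A toy illustration of the claim that the algebra can be stated and solved
purely in words; the tiny vocabulary and the single hard-coded rewrite rule are
hypothetical, and nothing here is meant as a real language system.)

# Toy illustration: the equation x*3 = y stated and solved purely in words.
def solve_in_words(sentence):
    # "a quantity times three equals twelve"  ->  "the quantity equals four"
    words_to_num = {"twelve": 12, "fifteen": 15, "thirty": 30}
    num_to_words = {4: "four", 5: "five", 10: "ten"}
    y = words_to_num[sentence.split()[-1]]
    # Verbal rule: if a quantity times three equals y,
    # then the quantity equals y divided by three.
    return "the quantity equals " + num_to_words[y // 3]

print(solve_in_words("a quantity times three equals twelve"))
# -> the quantity equals four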



The problem is not to learn the equations or the symbols.

The point is that a system which is able to understand and learn linguistic
knowledge is not necessarily able to use and apply its knowledge to solve
problems.

 

- Matthias

 






[agi] If your AGI can't learn to play chess it is no AGI

2008-10-21 Thread Dr. Matthias Heger
It seems to me that many people think that embodiment is very important for
AGI.

For instance, some people seem to believe that you can't be a good
mathematician if you haven't had some embodied experience.

 

But this would have a rather strange consequence:

If you give your AGI a difficult mathematical problem to solve, then it
would answer:

 

"Sorry, I still cannot solve your problem, but let me walk with my body
through the virtual world.

Hopefully, I will then understand your mathematical question, and even more
hopefully I will be able to solve it after some further embodied
experience."

 

AGI is the ability to solve different problems in different domains. But
such an AGI would need to gather experience in domain d1 in order to solve
problems of domain d2. Does this really make sense if all the information
necessary to solve problems of d2 is in d2? I think an AGI which has to gather
experience in d1 in order to solve a problem of domain d2, which contains
everything needed to solve this problem, is no AGI. How should such an AGI know
what experiences in d1 are necessary to solve the problem of d2?

 

In my opinion a real AGI must be able to solve a problem of a domain d
without leaving this domain, if that domain contains everything needed to solve
the problem.

 

From this we can define a simple benchmark which is not sufficient for AGI
but which is *necessary* for a system to be an AGI system:

 

Within the domain of chess there is everything to know about chess. So when it
comes to becoming a good chess player, learning chess from playing chess must
be sufficient. Thus, an AGI which is not able to enhance its abilities in chess
from playing chess alone is no AGI.

 

Therefore, my first steps in the roadmap towards AGI would be the following:

1.   Make a concept for the architecture of your AGI.

2.   Implement the software for your AGI.

3.   Test whether your AGI is able to become a good chess player from learning
in the domain of chess alone (a minimal harness for this step is sketched
below).

4.   If your AGI can't even learn to play chess well, then it is no AGI
and it would be a waste of time to experiment with your system in more
complex domains.
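
(A minimal sketch of step 3 as a self-play benchmark harness. The Agent
interface, choose_move and learn_from_game, is a hypothetical placeholder for
whatever the AGI under test exposes; the python-chess library is assumed only
for the rules and move generation, not as part of the AGI.)

import random
import chess

class RandomAgent:
    """Stand-in agent; a real AGI would replace both methods."""
    def choose_move(self, board):
        return random.choice(list(board.legal_moves))
    def learn_from_game(self, moves, result):
        pass  # the AGI under test updates itself from its own games here

def self_play_games(agent, n_games=100):
    results = []
    for _ in range(n_games):
        board, moves = chess.Board(), []
        while not board.is_game_over():
            move = agent.choose_move(board)
            board.push(move)
            moves.append(move)
        result = board.result()          # "1-0", "0-1" or "1/2-1/2"
        agent.learn_from_game(moves, result)
        results.append(result)
    return results

# The benchmark question is simply whether playing strength (e.g. the score
# against a fixed reference opponent) improves as more self-play games are
# fed into learn_from_game().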

 

-Matthias

 

 

 

 






AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
Any argument of the kind "you had better first read xxx + yyy + ..." is
very weak. It is a pseudo killer argument against everything, with no content
at all.

If xxx, yyy, ... contain really relevant information for the discussion,
then it should be possible to quote the essential part in a few lines of
text.

If someone is not able to do this, he should himself better read xxx, yyy, ...
once again.

 

-Matthias

 

 

Ben wrote

 

It would also be nice if this mailing list could operate on a bit more of
a scientific basis.  I get really tired of pointing to specific references
and then being told that I have no facts or that it was solely my opinion.

 


This really has to do with the culture of the community on the list, rather
than the operation of the list per se, I'd say.

I have also often been frustrated by the lack of inclination of some list
members to read the relevant literature.  Admittedly, there is a lot of it
to read.  But on the other hand, it's not reasonable to expect folks who
*have* read a certain subset of the literature, to summarize that subset in
emails for individuals who haven't taken the time.  Creating such summaries
carefully takes a lot of effort.

I agree that if more careful attention were paid to the known science
related to AGI ... and to the long history of prior discussions on the
issues discussed here ... this list would be a lot more useful.

But, this is not a structured discussion setting -- it's an Internet
discussion group, and even if I had the inclination to moderate more
carefully so as to try to encourage a more carefully scientific mode of
discussion, I wouldn't have the time...

ben g




 






AW: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
Terren wrote


Language understanding requires a sophisticated conceptual framework
complete with causal models, because, whatever meaning means, it must be
captured somehow in an AI's internal models of the world.


"Conceptual framework" is not well defined. Therefore I can't agree or
disagree.
What do you mean by "causal model"?



The Piraha tribe in the Amazon basin has a very primitive language compared
to all modern languages - it has no past or future tenses, for example - and
as a people they exhibit barely any of the hallmarks of abstract reasoning
that are so common to the rest of humanity, such as story-telling, artwork,
religion... see http://en.wikipedia.org/wiki/Pirah%C3%A3_people. 


How do you explain that?


In this example we observe two phenomena:
1. "primitive language compared to all modern languages"
2. "as a people they exhibit barely any of the hallmarks of abstract
reasoning"

From this we can neither conclude that 1 causes 2 nor that 2 causes 1.



I'm saying that if an AI understands  speaks natural language, you've
solved AGI - your Nobel will be arriving soon.  


This is just your opinion. I disagree that natural language understanding
necessarily implies AGI. For instance, I doubt that anyone can prove that
any system which understands natural language is necessarily able to solve
the simple equation x *3 = y for a given y.
And if this is not proven then we shouldn't assume that natural language
understanding without hidden further assumptions implies AGI.



The difference between AI1 that understands Einstein, and any AI currently
in existence, is much greater then the difference between AI1 and Einstein.


This might be true but what does this  show?




Sorry, I don't see that, can you explain the proof?  Are you saying that
sign language isn't natural language?  That would be patently false. (see
http://crl.ucsd.edu/signlanguage/)


Yes. In my opinion, sign language is not a natural language as the term is
usually understood.




So you're agreeing that language is necessary for self-reflectivity. In your
models, then, self-reflectivity is not important to AGI, since you say AGI
can be realized without language, correct?


No. Self-reflectivity needs just a feedback loop over the system's own
processes. I do not say that AGI can be realized without language. AGI must
produce outputs and AGI must obtain inputs. For inputs and outputs there must
be protocols. These protocols are not fixed but depend on the input devices and
output devices. For instance the AGI could use the Hubble telescope or a
microscope or both. 
For the domain of mathematics a formal language which is specified by humans
would be the best for input and output. 


I'm not saying that language is inherently involved in thinking, but it is
crucial for the development of *sophisticated* causal models of the world -
the kind of models that can support self-reflectivity. Word-concepts form
the basis of abstract symbol manipulation.

That gets the ball rolling for humans, but the conceptual framework that
emerges is not necessarily tied to linguistics, especially as humans get
feedback from the world in ways that are not linguistic (scientific
experimentation/tinkering, studying math, art, music, etc).


That is just your opinion again. I tolerate your opinion. But I have a
different opinion. The future will show which approach is successful.

- Matthias





AW: [agi] Re: Value of philosophy

2008-10-20 Thread Dr. Matthias Heger
I think in the past there were always difficult technological problems
leading to conceptual controversy about how to solve them. Time has
always shown which approaches were successful and which were not.

The fact that we have so many philosophical discussions shows that we are still
at the beginning. There is still no real evidence that any particular AGI
approach will be successful. Sorry, this is just my opinion. 

And this is the only(!!) reason why AGI doubters can still survive.

I am no AGI doubter at all. In my opinion a lot of people want to make
things more complicated than they are.

AGI is possible! Proof: We exist.

AGI is easy! Proof: Our genome is less than 1GB, i.e. less than your
USB stick. How much of it is needed for our brain? Probably Windows Vista needs
more memory than AGI.
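
(The back-of-the-envelope arithmetic behind the 1GB figure, assuming roughly
3.2 billion base pairs at 2 bits per base; the numbers are approximate.)

# ~3.2e9 base pairs, 2 bits per base (A/C/G/T), 8 bits per byte.
base_pairs = 3.2e9
bytes_raw = base_pairs * 2 / 8
print(f"{bytes_raw / 1e9:.1f} GB uncompressed")   # ~0.8 GB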

We always have to think about the huge computational and memory resources of
the brain, with its massively concurrent computing.
We can therefore assume that a lot of mythical things like creativity are
nothing other than brute-force giant-database phenomena of the brain.
Especially, as long as there is no evidence that things must be complicated,
we should assume that they are easy.

The AGI community suffers from its own main assumption that AGI is
difficult. 
For instance, things like Gödel's theorem etc. are of no relevance at all.
All we want to build is a finite system with a maximum number of
applications. Gödel says absolutely nothing against this goal.

A further problem: AGI approaches are often too anthropomorphized
(embodiment, natural language, ... sorry).

-  Matthias



We need to work more on the foundations, to understand whether we are
going in the right direction on at least good enough level to persuade
other people (which is NOT good enough in itself, but barring that,
who are we kidding).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger
If MW were scientific, then he would not have asked Ben to prove that MW's
hypothesis is wrong.
The person who has to prove something is the person who creates the
hypothesis.
And MW has not given even a tiny argument for his hypothesis that a natural
language understanding system can easily be a scientist.

-Matthias

-Original Message-
From: Eric Burton [mailto:[EMAIL PROTECTED] 
Sent: Monday, 20 October 2008 22:48
To: agi@v2.listbox.com
Subject: Re: AW: AW: [agi] Re: Defining AGI

 You and MW are clearly as philosophically ignorant, as I am in AI.

But MW and I have not agreed on anything.

Hence the wiki entry on scientific method:
Scientific method is not a recipe: it requires intelligence, imagination,
and creativity
http://en.wikipedia.org/wiki/Scientific_method
This is basic stuff.

And this is fundamentally what I was trying to say.

I don't think of myself as philosophically ignorant. I believe
you've reversed the intention of my post. It's probably my fault for
choosing my words poorly. I could have conveyed the nuances of the
argument better as I understood them. Next time!




AW: AW: AW: AW: AW: [agi] Re: Defining AGI

2008-10-20 Thread Dr. Matthias Heger




A conceptual framework starts with knowledge representation. Thus a symbol S 
refers to a persistent pattern P which is, in some way or another, a reflection 
of the agent's environment and/or a composition of other symbols. Symbols are 
related to each other in various ways. These relations (such as "is a property 
of", "contains", "is associated with") are either given or emerge in some kind 
of self-organizing dynamic.

A causal model M is a set of symbols such that the activation of symbols 
S1...Sn are used to infer the future activation of symbol S'. The rules of 
inference are either given or emerge in some kind of self-organizing dynamic.

A conceptual framework refers to the whole set of symbols and their relations, 
which includes all causal models and rules of inference.

Such a framework is necessary for language comprehension because meaning is 
grounded in that framework. For example, the word 'flies' has at least two 
totally distinct meanings, and each is unambiguously evoked only when given the 
appropriate conceptual context, as in the classic example "time flies like an 
arrow; fruit flies like a banana".  "time" and "fruit" have very different sets 
of relations to other patterns, and these relations can in principle be 
employed to disambiguate the intended meaning of "flies" and "like".

If you think language comprehension is possible with just statistical methods, 
perhaps you can show how they would work to disambiguate the above example.
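
(As a rough sketch of the relation-based disambiguation described above; the
symbols, relations and overlap scoring are all hypothetical toy data, not a
real model of either word.)

RELATIONS = {
    "flies/verb":   {"time", "arrow", "passes", "quickly"},
    "flies/insect": {"fruit", "banana", "insect", "swarm"},
    "like/verb":    {"fruit", "enjoy", "banana"},
    "like/prep":    {"arrow", "similar", "manner"},
}

def disambiguate(word, context):
    """Pick the sense whose related symbols overlap most with the context."""
    senses = [s for s in RELATIONS if s.startswith(word + "/")]
    return max(senses, key=lambda s: len(RELATIONS[s] & set(context)))

print(disambiguate("flies", ["time", "arrow"]))    # flies/verb
print(disambiguate("flies", ["fruit", "banana"]))  # flies/insect
print(disambiguate("like", ["fruit", "banana"]))   # like/verb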




I agree with your framework but it is in my approach a part of nonlinguistic D 
which is separated from L. D and L interact only during the process of 
translation but even in this process D and L are separated.





OK, let's look at all 3 cases:

1. Primitive language *causes* reduced abstraction faculties
2. Reduced abstraction faculties *causes* primitive language
3. Primitive language and reduced abstraction faculties are merely correlated; 
neither strictly causes the other

I've been arguing for (1), saying that language and intelligence are 
inseparable (for social intelligences). The sophistication of one's language 
bounds the sophistication of one's conceptual framework. 

In (2), one must be saying with the Piraha that they are cognitively deficient 
for another reason, and their language is primitive as a result of that 
deficiency. Professor Daniel Everett, the anthropological linguist who first 
described the Pirahã grammar, dismissed this possibility in his paper "Cultural 
Constraints on Grammar and Cognition in Pirahã" (see 
http://www.eva.mpg.de/psycho/pdf/Publications_2005_PDF/Commentary_on_D.Everett_05.pdf):

... [the idea that] the Pirahã are substandard mentally—is easily disposed of.
The source of this collective conceptual deficit could only be genetics,
health, or culture. Genetics can be ruled out because the Pirahã people
(according to my own observations and Nimuendajú's) have long intermarried
with outsiders. In fact, they have intermarried to the extent that no
well-defined phenotype other than stature can be identified. Pirahãs also
enjoy a good and varied diet of fish, game, nuts, legumes, and fruits, so
there seems to be no dietary basis for any inferiority. We are left, then,
with culture, and here my argument is exactly that their grammatical
differences derive from cultural values. I am not, however, making a claim
about Pirahã conceptual abilities but about their expression of certain
concepts linguistically, and this is a crucial difference.

This quote thus also addresses (3), that the language and the conceptual 
deficiency are merely correlated. Everett seems to be arguing for this point, 
that their language and conceptual abilities are both held back by their 
culture. There are questions about the dynamic between culture and language, 
but that's all speculative.

I realize this leaves the issue unresolved. I include it because I raised the 
Piraha example and it would be disingenuous of me to not mention Everett's 
interpretation.



Everett's interpretation is that culture is responsible for the reduced
abstraction faculties. I agree with this. But this does not imply your claim (1)
that language causes the reduced faculties. The reduced number of cultural
experiences in which abstraction is important is responsible for the reduced
abstraction faculties.

 


Of course, but our opinions have consequences, and in debating the consequences 
we may arrive at a situation in which one of our positions appears absurd, 
contradictory, or totally improbable. That is why we debate about what is 
ultimately speculative, because sometimes we can show the falsehood of a 
position without empirical facts.

On to your example. The ability to do algebra is hardly a test of general 
intelligence, as software like Mathematica can do it. One could say that the 
ability to be *taught* how to do algebra reflects general intelligence, but 
again, that involves learning the *language* of mathematical formalism.



AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
The process of outwardly expressing meaning may be fundamental to any social
intelligence, but the process itself does not need much intelligence.
Every email program can receive meaning, store meaning, and express it
outwardly in order to send it to another computer. It can even do so without
loss of any information. In this respect it already outperforms humans, who
have no conscious access to the full meaning (information) in their brains.

The only thing which needs much intelligence, from today's point of view,
is the learning of the process of outwardly expressing meaning, i.e. the
learning of language. The understanding of language itself is simple.

To show that intelligence is separate from language understanding, I have
already given the example that a person could have spoken with Einstein
without having the same intelligence. Another example is humans who cannot
hear or speak but are intelligent. They only have the problem of getting
knowledge from other humans, since language is the common social
communication protocol for transferring knowledge from brain to brain.

In my opinion language is overestimated in AI for the following reason:
When we think we believe that we think in our language. From this we
conclude that our thoughts are inherently structured by linguistic elements.
And if our thoughts are so deeply connected with language then it is a small
step to conclude that our whole intelligence depends inherently on language.

But this is a misconception.
We do not have conscious control over all of our thoughts. Most of the
activity within our brain we cannot be aware of when we think.
Nevertheless it is very useful and even essential for human intelligence
to be able to observe at least a subset of one's own thoughts. It is this
subset which we usually identify with the whole set of thoughts. But in fact
it is just a tiny subset of all that happens in the 10^11 neurons.
For the top-level observation of its own thoughts the brain uses the learned
language. 
But this is no contradiction to the point that language is just a
communication protocol and nothing else. The brain translates its patterns
into language and routes this information to its own input regions.

The reason why the brain uses language in order to observe its own thoughts
is probably the following:
If a person A wants to communicate some of its patterns to a person B, then
it has to solve two problems:
1. How to compress the patterns?
2. How to send the patterns to person B?
The solution to both problems is language.

If a brain wants to observe its own thoughts it has to solve the same
problems.
The thoughts have to be compressed. If not you would observe every element
of your thoughts and you would end up in an explosion of complexity. So why
not use the same compression algorithm as it is used for communication with
other people? That's the reason why the brain uses language when it observes
its own thoughts. 

This phenomenon leads to the misconception that language is inherently
connected with thoughts and intelligence. In fact it is just a top level
communication protocol between two brains and within a single brain.

Future AGI will have a much broader bandwidth and even for the current
possibilities of technology human language would be a weak communication
protocol for its internal observation of its own thoughts.
 
- Matthias


Terren Suydam wrote:


Nice post.

I'm not sure language is separable from any kind of intelligence we can
meaningfully interact with.

It's important to note (at least) two ways of talking about language:

1. specific aspects of language - what someone building an NLP module is
focused on (e.g. the rules of English grammar and such).

2. the process of language - the expression of the internal state in some
outward form in such a way that conveys shared meaning. 

If we conceptualize language as in #2, we can be talking about a great many
human activities besides conversing: playing chess, playing music,
programming computers, dancing, and so on. And in each example listed there
is a learning curve that goes from pure novice to halting sufficiency to
masterful fluency, just like learning a language. 

So *specific* forms of language (including the non-linguistic) are not in
themselves important to intelligence (perhaps this is Matthias' point?), but
the process of outwardly expressing meaning is fundamental to any social
intelligence.

Terren






AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
What the computer does with the data it receives depends on the information
in the transferred data, its internal algorithms and its internal data.
This is the same with humans and natural language.


Language understanding would be useful for teaching the AGI existing
knowledge already represented in natural language. But natural language
understanding suffers from the problem of ambiguities. These ambiguities can
be resolved by having knowledge similar to what humans have. But then you have
a recursive problem, because the problem of obtaining this knowledge has to be
solved first.

Nature solves this problem with embodiment. Different people have similar
experiences, since the laws of nature do not depend on space and time.
Therefore we can all imagine a dog which is angry. Since we have experienced
angry dogs but we haven't experienced angry trees, we can resolve the
linguistic ambiguity of my former example and answer the question: Who was
angry?

The way to obtain knowledge through embodiment is hard and long, even in
virtual worlds. 
If the AGI is to understand natural language, it would need to have experiences
similar to those humans have in the real world. But this would require a very,
very sophisticated and rich virtual world. At least, there would have to be
angry dogs in the virtual world ;-) 

As I have already said, I do not think the ratio between the utility of this
approach and its costs would be favorable for a first AGI.

- Matthias




William Pearson [mailto:[EMAIL PROTECTED] wrote


If I specify in a language to a computer that it should do something,
it will do it no matter what (as long as I have sufficient authority).
Telling a human to do something, e.g. wave your hands in the air and
shout, the human will decide to do that based on how much it trusts
you and whether they think it is a good idea. Generally a good idea in
a situation where you are attracting the attention of rescuers,
otherwise likely to make you look silly.

I'm generally in favour of getting some NLU into AIs mainly because a
lot of the information we have about the world is still in that form,
so an AI without access to that information would have to reinvent it,
which I think would take a long time. Even mathematical proofs are
still somewhat in natural language. Other than that you could work on
machine language understanding where information was taken in
selectively and judged on its merits not its security credentials.

  Will Pearson







AW: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Dr. Matthias Heger
I agree that understanding is the process of integrating different models,
different meanings, different pieces of information as seen by your
model. But this integrating is just matching, not extending your own model
with new entities. You only match linguistic entities of received
linguistically represented information with existing entities of your model
(i.e. with some of your existing patterns). If you can manage the matching
process successfully, then you have understood the linguistic message. 

Natural communication and language understanding are completely comparable
to common processes in computer science. There is an internal data
representation. A subset of this data is translated into a linguistic string
and transferred to another agent, which retranslates the message before it
possibly, but not necessarily, changes its database.
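
(The analogy in code form, as a minimal sketch only: the two "agents" are just
dictionaries, the "language" is JSON, and the topic "dog" is a made-up example,
purely for illustration.)

# Internal representation is serialized ("translated") into a string,
# transferred, and deserialized ("retranslated") by the receiver, which
# may or may not update its own data.
import json

sender_model = {"dog": {"can_be": ["angry", "friendly"]}}

def express(model, topic):
    return json.dumps({topic: model[topic]})        # translate into a "sentence"

def understand(receiver_model, message):
    content = json.loads(message)                   # retranslate
    for topic, info in content.items():
        receiver_model.setdefault(topic, info)      # possibly, but not necessarily, update
    return receiver_model

receiver_model = {}
msg = express(sender_model, "dog")
print(understand(receiver_model, msg))   # {'dog': {'can_be': ['angry', 'friendly']}}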

The only reason why natural language understanding is so difficult is
that it needs a lot of knowledge to resolve ambiguities, knowledge which humans
usually gain via their own experience.

But alone from being able to resolve the ambiguities and being able to do
the matching process successfully you will know nothing about the creation
of patterns and the way how to work intelligently with these patterns.
Therefore communication is separated from these main problems of AGI in the
same way as communication is completely separated from the structure and
algorithms of the database of computers.

Only the process of *learning* such communication would be AI (I am not
sure if it is AGI). But you cannot learn to communicate if there is nothing
to communicate. So every approach towards AGI via *learning* language
understanding will need at least one further domain for the content of the
communication. Probably you need even more domains, because the linguistic
ambiguities can be resolved only with broad knowledge.

And this is why I say that language understanding would incur costs
which are not necessary. We can build AGI just by concentrating all efforts
on a *single* domain with very useful properties (e.g. the domain of
mathematics).
This would avoid the immense costs of simulating real worlds and of
concentrating on *at least two* domains at the same time.

-Matthias


Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote
 
Sent: Sunday, 19 October 2008 12:59
To: agi@v2.listbox.com
Subject: [agi] Re: Meaning, communication and understanding

On Sun, Oct 19, 2008 at 11:58 AM, Dr. Matthias Heger [EMAIL PROTECTED]
wrote:
 The process of outwardly expressing meaning may be fundamental to any
social
 intelligence but the process itself needs not much intelligence.

 Every email program can receive meaning, store meaning and it can express
it
 outwardly in order to send it to another computer. It even can do it
without
 loss of any information. Regarding this point, it even outperforms humans
 already who have no conscious access to the full meaning (information) in
 their brains.

 The only thing which needs much intelligence from the nowadays point of
view
 is the learning of the process of outwardly expressing meaning, i.e. the
 learning of language. The understanding of language itself is simple.


Meaning is tricky business. As far as I can tell, meaning Y of a
system X is an external model that relates system X to its meaning Y
(where meaning may be a physical object, or a class of objects, where
each individual object figures into the model). Formal semantics works
this way (see http://en.wikipedia.org/wiki/Denotational_semantics ).
When you are thinking about an object, the train of thought depends on
your experience with that object, and will influence your behavior in
situations depending on information about that objects. Meaning
propagates through the system according to rules of the model,
propagates inferentially in the model and not in the system, and so
can reach places and states of the system not at all obviously
concerned with what this semantic model relates them to. And
conversely, meaning doesn't magically appear where model doesn't say
it does: if system is broken, meaning is lost, at least until you come
up with another model and relate it to the previous one.

When you say that e-mail contains meaning and network transfers
meaning, it is an assertion about the model of content of e-mail, that
relates meaning in the mind of the writer to bits in the memory of
machines. From this point of view, we can legitimately say that
meaning is transferred, and is expressed. But the same meaning doesn't
exist in e-mails if you cut them from the mind that expressed the
meaning in the form of e-mails, or experience that transferred meaning
in the mind.

Understanding is the process of integrating different models,
different meanings, different pieces of information as seen by your
model. It is the ability to translate pieces of information that have
nontrivial structure, in your basis. Normal use of understanding
applies only to humans, everything else generalizes

AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
For the discussion of the subject, the details of the pattern representation
are not important at all. It is sufficient if you agree that a spoken
sentence represents a certain set of patterns which are translated into the
sentence. The receiving agent retranslates the sentence and matches the
content with its model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The
brain even predicts patterns if it just hears the first syllable of a word:

http://www.rochester.edu/news/show.php?id=3244
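
A crude sketch of that kind of prediction from a word fragment (the lexicon
and the pattern labels below are invented):

lexicon = {"candle": "pattern-candle", "candy": "pattern-candy", "camera": "pattern-camera"}

def predict(heard_fragment):
    # activate every stored pattern whose word starts with the fragment heard so far
    return [pattern for word, pattern in lexicon.items() if word.startswith(heard_fragment)]

print(predict("can"))  # the candle and candy patterns are pre-activated before the word ends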

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.

From the ambiguities of natural language you obtain some hints about the
structure of the patterns. But you cannot expect to obtain every detail of
these patterns by understanding the process of language understanding.
There will probably be many details within these patterns which are only
necessary for internal calculations.
These details will not be visible from the linguistic point of view. Just
think about communicating computers and you will know what I mean.


- Matthias

Mike Tintner [mailto:[EMAIL PROTECTED]] wrote:

Matthias,

You seem - correct me - to be going a long way round saying that words are
different from concepts - they're just sound-and-letter labels for concepts,
which have a very different form. And the processing of words/language is
distinct from and relatively simple compared to the processing of the
underlying concepts.

So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words "c-a-t" or "m-i-n-d" or "U-S" or "f-i-n-a-n-c-i-a-l c-r-i-s-i-s"
are distinct from the underlying concepts. The question is: what form do
those concepts take? And what is happening in our minds (and what has to
happen in any mind) when we process those concepts?

You talk of patterns. What patterns, do you think, form the concept of
"mind" that are engaged in thinking about sentence 2? Do you think that
concepts like "mind" or "the US" might involve something much more complex
still? Models? Or is that still way too simple? Spaces?

Equally, of course, we can say that each *sentence* above is not just a
verbal composition but a conceptual composition - and the question then
is: what form does such a composition take? Do sentences form, say, a
pattern of patterns, or something like a picture? Or a blending of
spaces?

Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. a million dollars - 
something that we know can be cashed in, in an infinite variety of ways, but

that we may not have to start cashing in,  (when processing), unless 
really called for - or only cash in so far?

P.S. BTW this is the sort of psycho-philosophical discussion that I would 
see as central to AGI, but that most of you don't want to talk about?





Matthias: What the computer does with the data it receives depends on the
information in the transferred data, its internal algorithms and its
internal data. This is the same with humans and natural language.

 Language understanding would be useful to teach the AGI with existing
 knowledge already represented in natural language. But natural language
 understanding suffers from the problem of ambiguities. These ambiguities can
 be resolved by having knowledge similar to that which humans have. But then
 you have a recursive problem, because the problem of obtaining this
 knowledge has to be solved first.

 Nature solves this problem with embodiment. Different people have similar
 experiences since the laws of nature do not depend on space and time.
 Therefore we can all imagine a dog which is angry. Since we have experienced
 angry dogs but have not experienced angry trees, we can resolve the
 linguistic ambiguity of my former example and answer the question: who was
 angry?

 The way to obtain knowledge with embodiment is hard and long even in virtual
 worlds. If the AGI shall understand natural language, it would have to have
 experiences similar to those humans have in the real world. But this would
 need a very, very sophisticated and rich virtual world. At least, there
 would have to be angry dogs in the virtual world ;-)

 As I have already said, I do not think the relation between the utility of
 this approach and its costs would be positive for a first AGI.






AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
The process of changing the internal model does not belong to language
understanding.
Language understanding ends when the matching process is finished. Language
understanding can be strictly separated conceptually from the creation and
manipulation of patterns, just as you can separate the process of
communication from the process of manipulating the database in a computer.
You can see it differently, but then everything is only a discussion about
definitions.

- Matthias


Mark Waser [mailto:[EMAIL PROTECTED]] wrote:

Sent: Sunday, 19 October 2008 19:00
To: agi@v2.listbox.com
Subject: Re: [agi] Words vs Concepts [ex Defining AGI]

 There is no creation of new patterns and there is no intelligent algorithm
 which manipulates patterns. It is just translating, sending, receiving and
 retranslating.

This is what I disagree entirely with.  If nothing else, humans are
constantly building and updating their mental model of what other people
believe and how they communicate it.  Only in routine, pre-negotiated
conversations can language be entirely devoid of learning.  Unless a
conversation is entirely concrete and based upon something like shared
physical experiences, it can't be any other way.  You're only paying
attention to the absolutely simplest things that language does (i.e. the tip
of the iceberg).








AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
If some details of the internal structure of patterns are visible, this is
no proof at all that there are not also details of the structure which are
completely hidden from the linguistic point of view.

Since in many communicating technical systems there are so many details
which are not transferred, I would bet that this is also the case in humans.

As long as we have no proof, this remains an open question. An AGI which may
have internal features for its patterns would have fewer restrictions and is
thus far easier to build.

- Matthias.


Mark Waser [mailto:[EMAIL PROTECTED]] wrote:

Read Pinker's The Stuff of Thought.  Actually, a lot of these details *are* 
visible from a linguistic point of view.







AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
 in tractable domains before attempting to extend that
knowledge to larger domains.

 

- Original Message - 

From: David Hart mailto:[EMAIL PROTECTED]  

To: agi@v2.listbox.com 

Sent: Sunday, October 19, 2008 5:30 AM

Subject: Re: AW: [agi] Re: Defining AGI

 


An excellent post, thanks!

IMO, it raises the bar for discussion of language and AGI, and should be
carefully considered by the authors of future posts on the topic of language
and AGI. If the AGI list were a forum, Matthias's post should be pinned!

-dave

On Sun, Oct 19, 2008 at 6:58 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

The process of outwardly expressing meaning may be fundamental to any social
intelligence, but the process itself does not need much intelligence.

Every email program can receive meaning, store meaning and express it
outwardly in order to send it to another computer. It can even do so without
loss of any information. In this respect it already outperforms humans, who
have no conscious access to the full meaning (information) in their brains.

The only thing which needs much intelligence, from today's point of view, is
the learning of the process of outwardly expressing meaning, i.e. the
learning of language. The understanding of language itself is simple.

To show that intelligence is separate from language understanding, I have
already given the example that a person could have spoken with Einstein
without having the same intelligence. Another example is people who cannot
hear or speak but are intelligent. They only have the problem of getting
knowledge from other humans, since language is the common social
communication protocol for transferring knowledge from brain to brain.

In my opinion language is overestimated in AI for the following reason:
when we think, we believe that we think in our language. From this we
conclude that our thoughts are inherently structured by linguistic elements.
And if our thoughts are so deeply connected with language, then it is a
small step to conclude that our whole intelligence depends inherently on
language.

But this is a misconception.
We do not have conscious control over all of our thoughts. We cannot be
aware of most of the activity within our brain when we think.
Nevertheless it is very useful, and even essential for human intelligence,
to be able to observe at least a subset of one's own thoughts. It is this
subset which we usually identify with the whole set of thoughts. But in fact
it is just a tiny subset of all that happens in the 10^11 neurons.
For the top-level observation of its own thoughts the brain uses the learned
language.
But this is no contradiction to the point that language is just a
communication protocol and nothing else. The brain translates its patterns
into language and routes this information to its own input regions.

The reason why the brain uses language in order to observe its own thoughts
is probably the following:
If a person A wants to communicate some of its patterns to a person B, then
it has to solve two problems:
1. How to compress the patterns?
2. How to send the patterns to person B?
The solution to both problems is language.

If a brain wants to observe its own thoughts, it has to solve the same
problems. The thoughts have to be compressed. If not, you would observe
every element of your thoughts and you would end up in an explosion of
complexity. So why not use the same compression algorithm that is used for
communication with other people? That is the reason why the brain uses
language when it observes its own thoughts.
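
A toy sketch of this reuse of the compression step (everything here is, of
course, only illustrative):

def compress_to_language(active_patterns):
    # lossy compression: only short labels of the active patterns survive
    return " ".join(sorted(active_patterns))

def input_region(utterance):
    # the same input machinery that handles speech from other people
    return "heard: " + utterance

# talking to another person and observing one's own thoughts reuse
# the very same compression step
active_patterns = {"hunger", "apple", "kitchen"}
inner_speech = compress_to_language(active_patterns)
print(input_region(inner_speech))  # the brain "hears" its own compressed thoughts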

This phenomenon leads to the misconception that language is inherently
connected with thoughts and intelligence. In fact it is just a top-level
communication protocol between two brains and within a single brain.

A future AGI will have a much broader bandwidth, and even with current
technology human language would be a weak communication protocol for the
internal observation of its own thoughts.

- Matthias

 

 



AW: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger

Terren wrote:


Isn't the *learning* of language the entire point? If you don't have an
answer for how an AI learns language, you haven't solved anything.  The
understanding of language only seems simple from the point of view of a
fluent speaker. Fluency however should not be confused with a lack of
intellectual effort - rather, it's a state in which the effort involved is
automatic and beyond awareness.

I don't think that the learning of language is the entire point. If I have
only learned language, I still cannot create anything. A human who can
understand language is still far from being a good scientist. Intelligence
means the ability to solve problems. Which problems can a system solve if it
can do nothing other than understand language?

Einstein had to express his (non-linguistic) internal insights in natural
language and in mathematical language.  In both modalities he had to use
his intelligence to make the translation from his mental models. 

The point is that someone else could understand Einstein even without
having the same intelligence. This is a proof that understanding AI1 does
not necessarily imply having the intelligence of AI1.

Deaf people speak in sign language, which is only different from spoken
language in superficial ways. This does not tell us much about language
that we didn't already know.

But it is a proof that *natural* language understanding is not necessary for
human-level intelligence.
 
It is surely true that much/most of our cognitive processing is not at all
linguistic, and that there is much that happens beyond our awareness.
However, language is a necessary tool, for humans at least, to obtain a
competent conceptual framework, even if that framework ultimately
transcends the linguistic dynamics that helped develop it. Without language
it is hard to see how humans could develop self-reflectivity. 

I have already outlined the process of self-reflectivity: internal patterns
are translated into language. This is routed to the brain's own input
regions. You *hear* your own thoughts and have the illusion that you think
linguistically.
If you can speak two languages, you can do an easy test: try to think in the
foreign language. It works. If language were inherently involved in the
process of thought, then thinking alternately in two languages would cost
the brain many resources. In fact you just use the other module for language
translation. This is a big hint that language and thought do not have much
in common.

-Matthias







AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
Mark Waser wrote

What if the matching process is not finished?
This is overly simplistic for several reasons since you're apparently 
assuming that the matching process is crisp, unambiguous, and irreversible 
(and ask Stephen Reed how well that works for TexAI).

I do not assume this. Why should I?

It *must* be remembered that the internal model for natural language
includes such critically entwined and constantly changing information as
what this particular conversation is about, what the speaker knows, and what
the speaker's motivations are.  The meaning of sentences can change
tremendously based upon the currently held beliefs about these questions.
Suddenly realizing that the speaker is being sarcastic generally reverses
the meaning of statements.  Suddenly realizing that the speaker is using an
analogy can open up tremendous vistas for interpretation and analysis.  Look
at all the problems that people have parsing sentences.

If I suddenly realize that the speaker is sarcastic, then I change my
mappings from linguistic entities to pattern entities. Where is the problem?


The reason why you can separate the process of communication with the 
process of manipulating data in a computer is because *data* is crisp and 
unambiguous.  It is concrete and completely specified as I suggested in my 
initial e-mail.  The model is entirely known and the communication process 
is entirely specified.  None of these things are true of unstructured 
knowledge.

You have given no reason why the process of communication can be separated
from the process of manipulating data only if the knowledge is structured.
In fact there is no such reason.



Language understanding emphatically does not meet these requirements so
your 
analogy doesn't hold.

There are no special requirements.

- Matthias





AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
We can assume that the speaking human itself is not aware of every detail
of its patterns. At least these details would probably be hidden from
communication.

-Matthias

Mark Waser wrote

Details that don't need to be transferred are those which are either known
by or unnecessary to the recipient.  The former is a guess (unless the
details were transmitted previously) and the latter is an assumption based
upon partial knowledge of the recipient.  In a perfect, infinite world,
details could and should always be transferred.  In the real world, time and
computational constraints mean that trade-offs need to occur.  This is
where the essence of intelligence comes into play -- determining which of
the trade-offs to take to get optimal performance (a.k.a. domain
competence).






AW: [agi] Re: Meaning, communication and understanding

2008-10-19 Thread Dr. Matthias Heger
The language model does not need interaction with the environment when the
language model is already complete, which is possible for formal languages
but nearly impossible for natural language. That is the reason why a formal
language needs much lower costs.

If the language must be learned, then things are completely different, and
you are right that interaction with the environment is necessary to learn L.

But in any case there is a complete distinction between D and L. The brain
never sends entities of D to its output region; it sends entities of L.
Therefore there must be a strict separation between the language model and D.

- Matthias


Vladimir Nesov wrote

I think that this model is overly simplistic, overemphasizing an
artificial divide between domains within AI's cognition (L and D), and
externalizing communication domain from the core of AI. Both world
model and language model support interaction with environment, there
is no clear cognitive distinction between them. As a given,
interaction happens at the narrow I/O interface, and anything else is
a design decision for a specific AI (even invariability of I/O is, a
simplifying assumption that complicates semantics of time and more
radical self-improvement). Sufficiently flexible cognitive algorithm
should be able to integrate facts about any domain, becoming able to
generate appropriate behavior in corresponding contexts.






AW: AW: AW: [agi] Re: Defining AGI

2008-10-19 Thread Dr. Matthias Heger
Mark Waser wrote:


*Any* human who can understand language beyond a certain point (say, that of
a slightly sub-average human IQ) can easily be taught to be a good scientist
if they are willing to play along.  Science is a rote process that can be
learned and executed by anyone -- as long as their beliefs and biases don't
get in the way.



This is just an opinion and I strongly disagree with it. Obviously you
greatly overestimate language understanding.



This is a bit of a disingenuous side-track that I feel that I must address.
When people say natural language, the important features are extensibility
and ambiguity.  If you can handle one extensible and ambiguous language, you
should have the capabilities to handle all of them.  It's yet another
definition of GI-complete.  Just look at it as yet another example of
dealing competently with ambiguous and incomplete data (which is, at root,
all that intelligence is).


You use your personal definition of natural language. I don't think that
humans are intelligent because they use an ambiguous language. They would
also be intelligent if their language did not suffer from ambiguities.


One thought module, two translation modules -- except that all the
translation modules really are is label appliers and grammar re-arrangers.
The heavy lifting is all in the thought module.  The problem is that you are
claiming that language lies entirely in the translation modules while I'm
arguing that a large percentage of it is in the thought module.  The fact
that the translation module has to go to the thought module for
disambiguation and interpretation (and numerous other things) should make it
quite clear that language is *not* simply translation.


It is still just translation.



Further, if you read Pinker's book, you will find that languages have a lot
more in common than you would expect if language truly were independent of
and separate from thought (as you are claiming).  Language is built on top
of the thinking/cognitive architecture (not beside it and not independent of
it) and could not exist without it.  That is why language is AGI-complete.
Language also gives an excellent window into many of the features of that
cognitive architecture, and determining what is necessary for language also
determines what is in that cognitive architecture.  Another excellent window
is how humans perform moral judgments (try reading Marc Hauser -- either his
numerous scientific papers or the excellent Moral Minds).  Or, yet another,
is examining the structure of human biases.


There are also visual thoughts. You can imagine objects moving. The
principle is the same as with thoughts you perceive in your language: there
is an internal representation of patterns which is completely hidden from
your consciousness. The brain compresses and translates your visual thoughts
and routes the results to its own visual input regions.

As long as there is no real evidence against the model that thoughts are
separate from the way I perceive thoughts (e.g. via language), I do not see
any reason to change my opinion.

- Matthias





AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger
I have given the example with the dog next to a tree.
There is an ambiguity. It can be resolved because the pattern for the dog
has a stronger relation to the pattern for angry than the pattern for the
tree does.

You don't have to manipulate any patterns and you can still do the
translation.

- Matthias

Mark Waser wrote:

How do you communicate something for which you have no established 
communications protocol?  If you can answer that, you have solved the 
natural language problem.






AW: [agi] Words vs Concepts [ex Defining AGI]

2008-10-19 Thread Dr. Matthias Heger

Absolutely.  We are not aware of most of our assumptions that are based in 
our common heritage, culture, and embodiment.  But an external observer 
could easily notice them and tease out an awful lot of information about us 
by doing so.


You do not understand what I mean.
There will be a lot of implementation details (e.g. temporary variables)
within the patterns which will never be sent in linguistic messages.


I disagree with a complete distinction between D and L.  L is a very small
fraction of D translated for transmission.  However, instead of arguing that
there must be a strict separation between language model and D, I would
argue that the more similar the two could be (i.e. the less translation from
D to L) the better.  Analyzing L in that case could tell you more about D
than you might think (which is what Pinker and Hauser argue).  It's like
looking at data to determine an underlying cause for a phenomenon.  Even
noticing what does and does not vary (and what covaries) tells you a lot
about the underlying cause (D).


This is just an assumption of yours, not a fact. My opinion remains: D and L
are separated.



How do you go from a formal language to a competent description of a messy,
ambiguous, data-deficient world?  *That* is the natural language question.


Any algorithm in your computer is written in a formal, well-defined
language. If you agree that AGI is possible with current programming
languages, then you have to agree that the ambiguous, data-deficient world
can be managed by formal languages.



What happens if I say that language extensibility is exactly analogous to
learning which is exactly analogous to internal model improvement?


What happens? I disagree.


So translation is a pattern manipulation where the result isn't stored?


The result isn't stored in D


The domain of mathematics is complete and unambiguous.  A mathematics AI is
not a GI in my book.  It won't generalize to the real world until it handles
incompleteness and ambiguity (which is my objection to your main analogy).


If you say mathematics is not GI, then the following must be true for you:
the universe cannot be modeled by mathematics.
I disagree.


The communication protocol needs to be extensible to handle output after
learning or transition into a new domain.  How do you ground new concepts?
More importantly, it needs to be extensible to support teaching the AGI.  As
I keep saying, how are you going to make your communication protocol
extensible?  Real GENERAL intelligence has EVERYTHING to do with
extensibility.


For mathematics you just need a few axioms. There is an infinite number of
expressions which can be written with a finite set of symbols and a finite
formal language.

But extensibility is not a crucial point in this discussion at all. You can
have extensibility with a strict separation of D and L. For a first AGI in
mathematics I would hardcode an algorithm which manages an open list of
axioms and definitions as a language interface.
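
A minimal sketch of what such a hardcoded interface could look like (the
class and the sample axioms are only placeholders):

class AxiomStore:
    """An open, extensible list of axioms and definitions: formal, unambiguous,
    and strictly separate from whatever reasoning core consumes it."""

    def __init__(self):
        self.entries = []

    def add(self, name, statement):
        # extending the language interface is just appending a formal entry
        self.entries.append((name, statement))

    def dump(self):
        # the entire "communication protocol": a flat, formal listing
        return "\n".join(name + ": " + statement for name, statement in self.entries)

store = AxiomStore()
store.add("PA1", "0 is a natural number")
store.add("PA2", "for every natural number n, S(n) is a natural number")
print(store.dump())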


I keep pointing out that your model separating communication and database
updating depends upon a fully specified model and does not tolerate
ambiguity (i.e. it lacks extensibility and doesn't handle ambiguity).  You
continue not to answer these points.


Once again:
The separation of communication and database updating does not contradict
extensibility and ambiguity. Language data and domain data can be strictly
separated.
I can update the language database without communicating (e.g. just by
changing the hard disk) or while communicating. The main point is that the
model *D* need not be changed during communication. Furthermore, language
extension would be a nice feature, but it is not necessary.

The model D need not be fully specified at all.
If the model L is formal and without ambiguities, this does not imply at all
that problems with ambiguities cannot be handled.















AW: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
I think embodied linguistic experience could be *useful* for an AGI doing
mathematics. The reason for this is that creativity comes from the use of
huge knowledge and experience in different domains.

But on the other hand I don't think embodied experience is necessary. It
could even have some disadvantages. For example, we can think in 3D spaces
much better than in spaces of dimension n. But for science today,
3D mathematics is needed less than the mathematics of n-dimensional spaces.

An AGI which gets nothing but pure mathematical experience in arbitrary
mathematical spaces, which we give the AGI through our mathematical
definitions, could even have an important advantage over an AGI which is
full of 3D patterns because of its 3D embodied experiences.

I suppose the 3D vs. nD subject is just one of many examples one could find.
But the main argument against an embodied linguistic AGI for a first
generation of AGI is the amount of work necessary to build it. I do not
think that the relation of utility vs. costs is positive.

 

- Matthias

 

 

 


Ben Goertzel wrote:


That is not clear -- no human has learned math that way.

We learn math via a combination of math, human language, and physical
metaphors...

And, the specific region of math-space that humans have explored, is
strongly biased toward those kinds of math that can be understood via
analogy to physical and linguistic experience

I suggest that the best way for humans to teach an AGI math is via  first
giving that AGI embodied, linguistic experience ;-)

See Lakoff and Nunez, Where Mathematics Comes From, for related arguments.

-- Ben G






AW: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
If you don't like mirror neurons, forget them. They are not necessary for my
argument.


Trent wrote

Oh you just hit my other annoyance.

How does that work?

Mirror neurons

IT TELLS US NOTHING.

Trent







AW: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
I do not agree that body mapping is necessary for general intelligence. But
this would be one of the easiest problems today.
In the area of mapping the body onto another (artificial) body, computers
are already very smart:

See the video on this page:
http://www.image-metrics.com/

-Matthias


Mike Tintner wrote:

Trent,

I should have added that our brain and body, by observing the mere
shape/outline of others' bodies as in Matisse's Dancers, can tell not only
how to *shape* our own outline, but how to dispose of our *whole body* -
we transpose/translate (or flesh out) a static two-dimensional body shape
into an extremely complex set of instructions as to how to position and move
our entire, *solid* body with all its immensely complex musculature. It's an
awesomely detailed process, mechanically, when you analyse it.

(It reminds me of an observation by Vlad, long ago, about how efficient some
computational coding can be. That painting of the Dancers surely must
represent a vastly more efficient form of coding than anything digital or
rational languages can achieve. So much info has been packed into such a
brief outline. Never was so much told by so little? The same is true of
artistic drawing generally).

P.S. Perhaps the best summary of all this is that general intelligence
depends on body mapping - fluidly and physically/embodied-ly mapping our
body onto others (as totally distinct from mapping structures of symbols
onto each other). Not worth discussing, Ben?







AW: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
There is no big depth in language itself. There is only depth in the
information (i.e. patterns) which is transferred using the language.

Human language seems so magical because at first view it is so ambiguous.
And just these ambiguities show that my model of transferred patterns is
right.

An example:
"Yesterday I saw a big dog next to a tree. It seemed to be very angry."

Who is "it"? The tree or the dog? I think most people would answer: the
dog.
The reason why we can resolve this ambiguity is that people have similar
patterns for trees, for dogs and for angry.

So when you assume the model of transferred patterns, you see that no big
intelligence is necessary to understand language.
The intelligence lies only in the brain's representation of the dog, the
tree and the emotion angry. But these representations hardly depend on
language.
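
A toy sketch of that resolution step (the association weights are invented):

associations = {
    ("dog", "angry"): 0.8,    # we have experienced angry dogs
    ("tree", "angry"): 0.01,  # angry trees are outside ordinary experience
}

def resolve_referent(candidates, attribute):
    # pick the candidate whose pattern is most strongly related to the attribute
    return max(candidates, key=lambda c: associations.get((c, attribute), 0.0))

print(resolve_referent(["dog", "tree"], "angry"))  # -> dog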


Only with a real-time brain scanner does the process of the creation of
language become very interesting. Because if you say "I feel happy", then
there must be a flow of information from your internal representation of
happiness to the spoken sentence. Hopefully there will be a day when we can
backtrack this process in order to understand how patterns in the brain are
implemented.

But black-box language understanding itself will show you only a few hints
about internal representations, just as understanding the XML strings coming
from computers shows only a few hints about their databases and algorithms.

Therefore I think the path towards AGI mainly via the study of language
understanding will be very long and may well end in a dead end.



Andi [EMAIL PROTECTED] wrote


And what's up with language as just a communication protocol?  I'm right
now going through the Teaching Company class on linguistics, and I'm kind
of surprised by the many interacting layers and depth in language.  And
how there is intricate contraint satisfaction going on between all the
levels.  It's not a simple thing in any way, and it sure looks AI-complete
to me.  I mean, it could just come down to intelligence really just being
a system of communication between modules.
andi


Ben wrote:
 I am well aware that building even *virtual* embodiment (in simulated
 worlds) is hard

 However, creating human-level AGI is **so** hard that doing other hard
 things in order to make the AGI task a bit easier, seems to make sense!!

 One of the things the OpenCog framework hopes to offer AGI developers is a
 relatively easy way to hook their proto-AGI systems up to virtual bodies
 ...
 saving them of doing the software integration work...

 Integration of robot simulators with virtual worlds, as I've been
 advocating, would make this sort of approach even more powerful...

 -- Ben G

 On Sat, Oct 18, 2008 at 3:45 AM, Dr. Matthias Heger [EMAIL PROTECTED]
 wrote:

  I think embodied linguistic experience could be **useful** for an AGI
 to
 do mathematics. The reason for this is that creativity comes from usage
 of
 huge knowledge and experiences in different domains.



 But on the other hand I don't think embodied experience is necessary. It
 could be even have some disadvantages. For example, we can think in 3d
 spaces much better than in spaces of dimension n. But for science today,
 3d-mathematics is less needed than mathematics of n-dimensional spaces.



 An AGI which gets nothing else than pure mathematical experiences in
 arbitrary mathematical spaces which we give the AGI by our mathematical
 definitions, could even have an important advantage against an AGI which
 is
 full of 3d patterns because of its 3d embodied experiences.



 I suppose the 3D vs. nD subject is just one of many examples one could
 find. But the main reason against embodied linguistic AGI for first
 generation AGI  is the amount of work necessary to build it. I do not
 think
 that the relation of utility vs. costs is positive.



 - Matthias







 
 Ben Goertzel wrote:


 That is not clear -- no human has learned math that way.

 We learn math via a combination of math, human language, and physical
 metaphors...

 And, the specific region of math-space that humans have explored, is
 strongly biased toward those kinds of math that can be understood via
 analogy to physical and linguistic experience

 I suggest that the best way for humans to teach an AGI math is via
 first
 giving that AGI embodied, linguistic experience ;-)

 See Lakoff and Nunez, Where Mathematics Comes From, for related
 arguments.

 -- Ben G




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome   - Dr Samuel Johnson





AW: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
I think here you can see that automated mapping between different faces is
possible and that the computer can morph smoothly between them. I think the
performance is much better than anything human imagination can achieve.

http://de.youtube.com/watch?v=nice6NYb_WA

-Matthias



Mike Tintner wrote


Matthias:

I do not agree that body mapping is necessary for general intelligence. But
 this would be one of the easiest problems today.
 In the area of mapping the body onto another (artificial) body, computers
 are already very smart:

 See the video on this page:
 http://www.image-metrics.com/


Matthias,

See my reply to David. This is typical of the free-form transformations 
that computers can achieve - and, I grant you,  is v. impressive. (I really 
think there should be a general book celebrating some of the recent 
achievements of geometry in animation - is there?).

But it is NOT mapping one body onto another. It is working only with one 
body, and transforming it in highly sophisticated operations.

Computer software can't map two totally different bodies onto each other -
can't perceive the likeness between the map of Italy and a boot. And it
can't usually perceive the same body or face in different physical or facial
forms - can't tell that two faces with v. different facial/emotional
expressions belong to the same person, eg Madonna, can it?






AW: AW: [agi] Re: Defining AGI

2008-10-18 Thread Dr. Matthias Heger
If you can build a system which understands human language, you are still
far away from AGI.
Being able to understand the language of someone else in no way implies
having the same intelligence. I think there were many people who understood
the language of Einstein but were not able to create the same.

Therefore it is better to build a system which is able to create things
instead of one which is only able to understand things.

Language understanding is easy for nearly every little child.
But mathematics is hard for most people and for computers today.

If you say mathematics is too narrow, then this implies that either the
world cannot be modeled by mathematics or the world itself is too narrow.

- Matthias



Andi wrote:

Matthias wrote:

 There is no big depth in the language. There is only depth in the
 information (i.e. patterns) which is transferred using the language.

This is a claim with which I obviously disagree.  I imagine linguists
would have trouble with it, as well.

And goes on to conclude:
 Therefore I think, the ways towards AGI mainly by studying language
 understanding will be very long and possibly always go in a dead end.

It seems similar to my point, too.  That's really what I see as a
definition of AI-complete as well.  If you had something that could
understand language, it would have to be able to do everything that a full
intelligence would do.  It seems there is a claim here that one could have
something that understands language but doesn't have anything else
underneath it.  Or maybe that language could just be something separated
away from some real intelligence lying underneath, and so studying just
that would be limiting.  And that is a possibility.  There are certainly
specific language modules that people have to assist them with their use
of language, but it does seem like intelligence is more integrated with
it.

And somebody suggested that it sounds like Matthias has some kind of
mentalese hidden down in there.  That spoken and written language is not
interesting because it is just a rearrangement of whatever internal
representation system we have.  That is a fairly bold claim, and has
logical problems like a homunculus.  It is natural for a computer person
to think that mental things can be modifiable and transmittable strings,
but it would be hard to see how that would work with people.

Also, I get a whole sense that Matthias is thinking there might be some
small general domain where we can find a shortcut to AGI.  No way. 
Natural language will be a long, hard road.  Any path going to a general
intelligence will be a long, hard road.  I would guess.  It still happens
regularly that people will say they're cooking up the special sauce, but I
have seen that way too many times.

Maybe I'm being too negative.  Ben is trying to push this list to being
more positive with discussions about successful areas of development.  It
certainly would be nice to have some domains where we can explore general
mechanisms.  I guess the problem I see with just math as a domain is that
the material could get too narrow a focus.  If we want generality in
intelligence, I think it is helpful to be able to have a possibility that
some bit of knowledge or skill from one domain could be tried in a
different area, and it is my claim that general language use is one of the
few areas where that happens.


andi








AW: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
I think it does  involve being confronted with two different faces or 
objects randomly chosen/positioned and finding/recognizing the similarities 
between them.

If you have watched the video carefully, then you have heard that they speak
of automated algorithms which do the matching.

On an initial macroscopic scale some human hinting is necessary, but on a
microscopic scale it is done by software alone, and after the initial
matching the complete morphing is done even on macroscopic scales.

Computer-generated morphing between completely different objects, as in your
picture, is no problem for computers after an initial matching of some
points of the first and the last picture has been made by humans.
It is a common special effect in many science fiction movies.

In the morphing video I gave, no manual initial matching of points was
necessary. Only the macroscopic positions of the two faces had to be
adjusted manually.

- Matthias Heger


Mike Tintner wrote:


Matthias: I think here you can see that automated mapping between different 
faces is
possible and the computer can smoothly morph between them. I think, the
performance is much better than the imagination of humans can be.

http://de.youtube.com/watch?v=nice6NYb_WA

Matthias,

Perhaps we're having difficulties communicating in words about a highly
visual subject. The above involves morphing systematically from a single
face.  It does not involve being confronted with two different faces or
objects randomly chosen/positioned and finding/recognizing the similarities
between them. My God, if it did, computers would have no problems with
visual object (or facial) recognition.

Of course, morphing operations by computers are better, i.e. immensely more
detailed and accurate, than anything the human mind can achieve - better
at, if you like, the mechanical *implementation* of imagination. (But bear
in mind that it was the imagination of the programmer that decided, in the
above software, which face should be transformed into which face. The
software could not by itself choose or create a totally new face to add to
its repertoire without guidance.)

What rational computers can't do is find similarities between disparate,
irregular objects - via fluid transformation - the essence of imagination. I
repeat - computers can't do this -

http://www.bearskinrug.co.uk/_articles/2005/09/16/doodle/hero.jpg

and therein lies the central mechanism of analogy and metaphor.

Rather than simply objecting to this, the focus should be on *how* to endow 
computers with imagination. 






AW: [agi] Re: Defining AGI.. PS

2008-10-18 Thread Dr. Matthias Heger
After the first positioning there is no point-to-point matching at all.

The main intelligence comes from the knowledge base of hundreds of 3D-scanned
faces.
This is a huge vector space, and it is no easy task to match a given picture
of a face with a vector (= face) within that vector space.

The programmer uses the average face of this vector space and not a special
one.

You claim the program cannot match a face with a harelip. Can you prove
this?

You underestimate the difficulties which are solved by the program and you
overestimate the act of the first manual positioning.

The mapping and morphing program is no AGI, but it is AI, and it has the
great advantage over AGI that it already works.

Your argumentation is the most common one against AI:
On the one hand, say that all the things computers can do need no
intelligence.
On the other hand, say that all the things computers cannot do but humans
can need an intelligence which computers will never have.

I am sure that the space where you can survive with this opinion will soon
become smaller and smaller ;-)

-Matthias




Mike Tintner wrote

Matthias,

When a programmer (or cameraman) macroscopic(ally) positions two faces -
adjusting them manually so that they are capable of precise point-to-point
matching, that proceeds from an initial act of visual object recognition -
and indeed imagination, as I have defined it.

He will have taken two originally disparate faces moving through many 
different not-easily-comparable positions, and recognized their 
compatibility - by, I would argue, a process of fluid transformation.

The programmer accordingly won't put any old two faces together - he won't
put one person with a harelip and/or one eye together with a regular face.
He won't put a woman with hair over her eyes together with one whose eyes
are unobscured - or one with heavy make-up with one who is clear - or, just
possibly, one with cosmetic surgery together with a natural face.  The human
brain is capable of recognizing the similarities and differences between all
such faces - the program isn't.

(I think you're being a bit difficult here - I don't think many others -
incl., say, Ben - would try to ascribe to these particular programs the
powers that you are ascribing.)






AW: [agi] NEWS: Scientist develops programme to understand alien languages

2008-10-17 Thread Dr. Matthias Heger
But even understanding an alien language would not necessarily imply
understanding how that intelligence works ;-) Furthermore, understanding the
language of an intelligent species is neither necessary nor sufficient for
having the same intelligence. In fact language is only a protocol for
transferring information.

-Original Message-
From: Pei Wang [mailto:[EMAIL PROTECTED]]
Sent: Friday, 17 October 2008 13:56
To: agi@v2.listbox.com
Subject: [agi] NEWS: Scientist develops programme to understand alien
languages

... even an alien language far removed from any on Earth is likely to
have recognisable patterns that could help reveal how intelligent the
life forms are.

the news:
http://www.telegraph.co.uk/earth/main.jhtml?view=DETAILSgrid=xml=/earth/20
08/10/15/scialien115.xml

the researcher: http://www.lmu.ac.uk/ies/comp/staff/jelliott/jre.htm




AW: [agi] Re: Defining AGI

2008-10-17 Thread Dr. Matthias Heger
There is a difference between *why* we use language and *how* language
(communication) *works*.
The ultimate reason why humans use language is to enhance the probability of
survival.
If someone has found something to eat, he can tell his group. Further,
language is useful for knowledge transfer from one generation to the next.
Another reason for language is to communicate intentions, as you have
already said.

But for all these reasons, language probably does indeed work as a protocol
for transferring patterns from one brain into another brain.

Even if a baby only cries, we can assume that it has some patterns of pain
in its brain. If the mother hears her baby crying, she will feel pain too,
to a certain degree (mirror neurons).

And I am pretty sure that if you read or hear the word "apple", your brain
activates the pattern for this object.

Of course language is no tool for transferring a complete brain dump from
one brain to another brain. But if you communicate your goals to another
person, then the other person creates patterns in its brain that represent,
and probably resemble, your own goal-patterns. This does not imply that the
other person adopts your goals. But this is no contradiction to my argument.

If a computer transfers a fragment of its database via an XML language to
another computer, then the other computer can reconstruct the original data
structure from the XML string. This does not necessarily imply that this
data fragment is used in the second computer by the same algorithms as in
the first computer.
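
A minimal sketch of that round trip using Python's standard XML library (the
data fragment is, of course, made up):

import xml.etree.ElementTree as ET

fragment = {"object": "apple", "colour": "red"}

# first computer: translate the internal representation into an XML string
root = ET.Element("fragment")
for key, value in fragment.items():
    ET.SubElement(root, "item", name=key).text = value
xml_string = ET.tostring(root, encoding="unicode")

# second computer: reconstruct an equivalent structure from the XML string
received = {item.get("name"): item.text for item in ET.fromstring(xml_string)}
assert received == fragment  # same content, possibly used by quite different algorithms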

So even if we have some of the same patterns after communication, we will
use these patterns differently, and this is the reason why the goals of one
person won't be adopted by the other person.

- Matthias


Trent Waddington [mailto:[EMAIL PROTECTED]] wrote:

On Fri, Oct 17, 2008 at 12:32 PM, Dr. Matthias Heger [EMAIL PROTECTED]
wrote:
 In my opinion language itself is no real domain for intelligence at all.
 Language is just a communication protocol. You have patterns of a certain
 domain in your brain you have to translate your internal pattern
 representation to a sequence of words in order to communicate your
patterns
 to another person.

I've seen this kind of comment on this list and elsewhere before.. and
it's typically just accepted without question.

Personally, I think it's pretty far from the truth.

Language does not exist to get the ideas in my head into yours.

Language exists so I can get you to do what I want.

The baby cries not to tell you that it is unhappy with the world, but
to get you to do something about it.

As such, squeaking out noises is just the same as moving a limb.  The
machinery that controls it is not trying to translate mentalese into
english so that it can be transformed back into mentalese in the
recipient's head with the greatest accuracy.  Sure, sometimes that'd
be nice, but most of the time the goal is simply to make you think what I
want you to think so you'll do my bidding.

That's why having a theory of mind is all tied up with language.  If I
don't know what you're thinking then I can't know what your intentions
are and if I don't know what your intentions are then I don't know if
I need to change them so you have intentions that serve my goals.

Even if we had direct mentalese transfer between two AGIs (and in the
neuromancer universe, between humans) we'd still need all that
interesting stuff that we study language for.. because if I brain dump
my current goals to you then it in no way implies that you're going to
alter your goals to match mine.

Trent







AW: AW: Defining AGI (was Re: AW: [agi] META: A possible re-focusing of this list)

2008-10-16 Thread Dr. Matthias Heger
In my opinion, the domain of software development is far too ambitious for
the first AGI.
Software development is not a closed domain. The AGI will need at least
knowledge about the domain of the problems for which the AGI shall write a
program.

The English interface is nice, but today it is just a dream. An English
interface is not needed for a proof of concept of a first AGI. So why make
the problem harder than it already is?

The domain of mathematics is closed but can be extended by adding more and
more definitions and axioms which are very compact. The interface could be
very simple. And thus you can concentrate mainly on building the kernel AGI
algorithm.

-Matthias


Trent Waddington [mailto:[EMAIL PROTECTED]  wrote

Yes, I'd want it to have an English interface.. because I'd also
expect it to be able to read comments and commit messages in the
revision control and the developer mailing lists, etc.  Open Source
programmers (and testers!) are basically disembodied but they get
along just fine.

I'd also expect it to be able to see windows and icons and all those
other things that are part of software these days.  I wouldn't expect
it to be able to test a program and say whether it was working
correctly or not if it couldn't even see it running and interact with
it.  Of course, if you're testing command line apps you could get away
with a much simpler sensor.

Trent






AW: [agi] Re: Defining AGI

2008-10-16 Thread Dr. Matthias Heger
In theorem proving, computers are weak too, compared with the performance of
good mathematicians.
The domain of mathematics is well understood. But we do not understand how
we manage to solve problems within this domain.

In my opinion language itself is no real domain for intelligence at all.
Language is just a communication protocol. You have patterns of a certain
domain in your brain, and you have to translate your internal pattern
representation into a sequence of words in order to communicate your patterns
to another person.

It is similar to the language of XML. The computer has a database and
algorithms which work with its data. But XML is just a communication
protocol. The computer translates its internal data representation into an
XML-string in order to send it to another place (possibly another computer).

Do you think you can learn how the databases and the algorithms work by
learning to understand the XML strings? I don't think so. And similarly, I
don't think it is an efficient way to build an AGI by trying to understand
human intelligence from the perspective of language understanding.
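
A minimal sketch of why the XML string alone reveals so little about the
algorithms behind it (the data and function names are invented purely for
illustration): two consumers parse the identical string and use it in
entirely different ways.

import xml.etree.ElementTree as ET

# The same "wire" string reaches two different programs.
wire = "<prices><item name='apple'>0.5</item><item name='pear'>0.7</item></prices>"

def consumer_total(xml_string: str) -> float:
    # Program 1: totals the prices.
    root = ET.fromstring(xml_string)
    return sum(float(item.text) for item in root)

def consumer_cheapest(xml_string: str) -> str:
    # Program 2: looks up the name of the cheapest item.
    root = ET.fromstring(xml_string)
    return min(root, key=lambda item: float(item.text)).get("name")

print(consumer_total(wire))     # total of the prices
print(consumer_cheapest(wire))  # apple

Studying only the string tells you nothing about which of these two
algorithms sits behind it.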

Of course, language understanding gives us some hints about data
representation within the brain:

Here is a simple example of language understanding with two sentences:
"Yesterday I went into a garden with apple trees. The fruits tasted very
good."

We know that the word fruits refers to the apples on the apple trees in the
garden.
The following could be a model for this phenomenon:
When the listener receives the first sentence, some patterns are activated in
his brain. One of these patterns is the pattern for apples. Then the person
receives the second sentence, which activates the pattern for fruits. Then,
somehow, the person creates links between the patterns of the first
sentence and the patterns of the second sentence.
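
A minimal sketch of that model (the pattern names and the tiny "is-a" table
are invented purely for illustration): the first sentence activates the apple
pattern, the second activates the fruit pattern, and a link is created because
apples are known to be fruits.

# Toy pattern activation plus a tiny piece of background knowledge.
IS_A = {"apple": "fruit"}

def activate(words):
    # Crude "pattern activation": every word with background knowledge lights up.
    return {w for w in words if w in IS_A or w in IS_A.values()}

sentence1 = activate(["garden", "apple", "tree"])   # activates {"apple"}
sentence2 = activate(["fruit", "taste", "good"])    # activates {"fruit"} (words pre-stemmed)

# Link a pattern from the second sentence back to one from the first sentence.
links = {(a, b) for a in sentence1 for b in sentence2 if IS_A.get(a) == b}
print(links)  # {('apple', 'fruit')}: "the fruits" is resolved to the apples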

This example suggests that we can learn the nature of human intelligence
through the domain of language understanding. But this is probably a
misconception, as I hope to have shown with my example of XML and the
databases and algorithms of computers.

The real substance of human intelligence lies behind language understanding,
not in language understanding itself.
Language understanding is just an add-on. In order to create AGI as soon as
possible, we should choose a domain which is as simple as possible but still
sufficient for AGI. Following this approach, we should drop language
understanding for the first AGI.

As Ben has pointed out, language understanding is useful for teaching an AGI.
But if we use the domain of mathematics, we can teach the AGI more easily with
formal expressions, and we understand these expressions as well.

- Matthias



Matt Mahoney wrote:

Would you consider Gelernter's theorem prover [1] an example of AGI? It
proved geometry theorems by drawing diagrams that helped it heuristically
trim the search space.

The problem with well understood problems is that they are well understood.
Thus, there has been little progress since the pioneering work in AI done
before 1965.

Computers can do many tasks better than humans. The areas where computers
are weak are language, vision, and motor coordination. I think a lot of the
intuition that mathematicians use in solving difficult proofs is really a
language modeling problem. Mathematicians can read proofs done by others and
apply similar techniques.

Likewise, writing software has to be understood in terms of natural language
learning and modeling. A programming language is a compromise between what
humans can understand and what machines can understand. Humans learn C++
grammar and semantics by induction, from lots of examples. Machines know C++
from an explicit specification.

Natural language is poorly understood, which is exactly why we need to study
it.

1. Gelernter, H., Realization of a Geometry-Theorem Proving Machine,
Proceedings of an International Conference on Information Processing, Paris:
UNESCO House, pp. 273-282, 1959.

-- Matt Mahoney, [EMAIL PROTECTED]





AW: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Dr. Matthias Heger
If we do not agree on how to define AGI, intelligence, creativity, etc., we
cannot discuss the question of how to build it.

And even if we all agree on these questions, there is still the question of
which domain it would be most useful to build the first AGI for.

 

AGI is the ability to solve different problems in different domains.

This definition is OK, but it is useless for answering the question of how to
build it.

 

We need a difficult but well understood domain which is AGI-complete and as
small as possible, but not too small, to avoid the risk of building only AI
instead of AGI.

 

Only if we have a well-defined problem to solve can we reduce the
philosophical questions.

In my opinion the domain of proofs in mathematics is AGI-complete. The
complexity is very high. The problem is well defined. The domain is not
random. Therefore intelligence is useful in this domain. There are easy and
difficult regularities, e.g. 1+2+3+...+n = 0.5*n*(n+1), or pi = 3.14159... The
crucial ability to recognize patterns in this domain, in order to handle the
exponential complexity, is important for creating mathematical proofs.
Mathematics is not an artificial domain, as we all know that mathematics
applies to many real-world problems, including physics.

 

Probably most people on this list think that mathematics is too small for
AGI. But if you say that embodiment is not necessary for AGI, then it is
better to drop it. Embodiment and natural language are of course interesting
and useful, as Ben has pointed out. But I think they create too many
additional problems and costs. Why should we spend time on things which we do
not regard as necessary for AGI? Choose a domain which is as small as possible
but still sufficient for AGI.

 

So I would propose the domain of mathematics. But I'm sure everyone has his
own preferred domain for AGI. And this is only one of the reasons why we do
not move forward to the engineering level of AGI.

 

From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 15 October 2008 17:01
To: agi@v2.listbox.com
Subject: [agi] META: A possible re-focusing of this list

 


Hi all,

I have been thinking a bit about the nature of conversations on this list.

It seems to me there are two types of conversations here:

1)
Discussions of how to design or engineer AGI systems, using current
computers, according to designs that can feasibly be implemented by
moderately-sized groups of people

2)
Discussions about whether the above is even possible -- or whether it is
impossible because of weird physics, or poorly-defined special
characteristics of human creativity, or the so-called complex systems
problem, or because AGI intrinsically requires billions of people and
quadrillions of dollars, or whatever

Personally I am pretty bored with all the conversations of type 2.

It's not that I consider them useless discussions in a grand sense ...
certainly, they are valid topics for intellectual inquiry.   

But, to do anything real, you have to make **some** decisions about what
approach to take, and I've decided long ago to take an approach of trying to
engineer an AGI system.

Now, if someone had a solid argument as to why engineering an AGI system is
impossible, that would be important.  But that never seems to be the case.
Rather, what we hear are long discussions of peoples' intuitions and
opinions in this regard.  People are welcome to their own intuitions and
opinions, but I get really bored scanning through all these intuitions about
why AGI is impossible.

One possibility would be to more narrowly focus this list, specifically on
**how to make AGI work**.

If this re-focusing were done, then philosophical arguments about the
impossibility of engineering AGI in the near term would be judged **off
topic** by definition of the list purpose.

Potentially, there could be another list, something like agi-philosophy,
devoted to philosophical and weird-physics and other discussions about
whether AGI is possible or not.  I am not sure whether I feel like running
that other list ... and even if I ran it, I might not bother to read it very
often.  I'm interested in new, substantial ideas related to the in-principle
possibility of AGI, but not interested at all in endless philosophical
arguments over various peoples' intuitions in this regard.

One fear I have is that people who are actually interested in building AGI,
could be scared away from this list because of the large volume of anti-AGI
philosophical discussion.   Which, I add, almost never has any new content,
and mainly just repeats well-known anti-AGI arguments (Penrose-like physics
arguments ... mind is too complex to engineer, it has to be evolved ...
no one has built an AGI yet therefore it will never be done ... etc.)

What are your thoughts on this?

-- Ben









AW: Defining AGI (was Re: AW: [agi] META: A possible re-focusing of this list)

2008-10-15 Thread Dr. Matthias Heger
Text compression would be AGI-complete, but I think it is still too big.
The problem is the source of knowledge. If you restrict yourself to
mathematical expressions, then the amount of data necessary to teach the AGI
is probably much smaller. In fact, the AGI could teach itself using a current
theorem prover.

-Matthias


Matt Mahoney wrote


I have argued that text compression is just such a problem. Compressing
natural language dialogs implies passing the Turing test. Compressing text
containing mathematical expressions implies solving those expressions.
Compression also allows for precise measurements of progress.
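
A minimal sketch of such a measurement, using zlib purely as a stand-in for a
much stronger language-model compressor (the sample text is invented):
progress would show up as a falling bits-per-character number on a fixed
corpus.

import zlib

def bits_per_character(text: str) -> float:
    # Compress the text and report the cost in bits per input character.
    data = text.encode("utf-8")
    return 8.0 * len(zlib.compress(data, 9)) / len(data)

sample = "the cat sat on the mat. " * 100
print(round(bits_per_character(sample), 3))  # highly repetitive text compresses well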

Text compression is not completely general. It tests language, but not
vision or embodiment. Image compression is a poor test for vision because
any progress in modeling high level visual features is overwhelmed by
incompressible low level noise. Text does not have this problem.

Hutter proved that compression is a completely general solution in the AIXI
model. (The best predictor of the environment is the shortest model that is
consistent with observation so far). However, this may not be very useful as
a test, because it would require testing over a random distribution of
environmental models rather than problems of interest to people such as
language.

-- Matt Mahoney, [EMAIL PROTECTED]





AW: AW: Defining AGI (was Re: AW: [agi] META: A possible re-focusing of this list)

2008-10-15 Thread Dr. Matthias Heger

My intention is not to define intelligence. I choose mathematics just as a
test domain for first AGI algorithms.

The reasons:
1. The domain is well understood.
2. The domain has regularities. Therefore a highly intelligent algorithm has a
chance to outperform less intelligent algorithms.
3. The domain can be modeled easily by software.
4. The domain is non-trivial. Current algorithms fail on hard problems in
this domain because of the exponentially growing complexity.
5. The domain allows a comparison with performance of human intelligence.


To decide whether you have an AGI or not, you also have to evaluate the
proofs themselves, not only the fact that it could prove something.

For example, the formula 1+2+3+...+n = 0.5*n*(n+1) can be proven by seeing a
pattern:

   1    +   2   +   3   + ... + (n-2) + (n-1) +   n
+  n    + (n-1) + (n-2) + ... +   3   +   2   +   1
= (n+1) + (n+1) + (n+1) + ... + (n+1) + (n+1) + (n+1)
= n*(n+1)

Adding the sum to itself in reverse order pairs the terms into n copies of
(n+1), so twice the sum is n*(n+1), and therefore 1+2+3+...+n = 0.5*n*(n+1).

An AGI will differ from AI by often using such pattern-based proofs.
An AGI-based theorem prover represents expressions by patterns. When it
comes to proving a certain formula, patterns of known expressions and rules
become active or inactive.
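
A minimal sketch that merely checks the closed form above against a
brute-force sum (a numerical sanity check, not a proof; the pairing argument
is what shows it holds for all n):

def closed_form(n: int) -> int:
    # Pattern-based result: 1 + 2 + ... + n = 0.5 * n * (n + 1)
    return n * (n + 1) // 2

def brute_force(n: int) -> int:
    return sum(range(1, n + 1))

assert all(closed_form(n) == brute_force(n) for n in range(1000))
print("closed form matches brute force for n = 0 .. 999")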

-Matthias

Matt Mahoney wrote:

Goedel and Turing showed that theorem proving is equivalent to solving the
halting problem. So a simple measure of intelligence might be to count the
number of programs that can be decided. But where does that get us? Either
way (as a set of axioms, or a description of a universal Turing machine),
the problem is algorithmically simple to describe. Therefore (by AIXI) any
solution will be algorithmically simple too.

If you defined AGI this way, what would be your approach?

-- Matt Mahoney, [EMAIL PROTECTED]





AW: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-07 Thread Dr. Matthias Heger

Mike Tintner wrote,

You don't seem to understand creative/emergent problems (and I find this 
certainly not universal, but v. common here).

If your chess-playing AGI is to tackle a creative/emergent  problem (at a 
fairly minor level) re chess - it would have to be something like: find a 
new way for chess pieces to move - and therefore develop a new form of 
chess   (without any preparation other than some knowledge about different 
rules and how different pieces in different games move).  Or something like 
get your opponent to take back his move before he removes his hand from the
piece - where some use of psychology, say, might be appropriate rather
than anything to do directly with chess itself.


In your example you leave the domain of chess rules.
There *are* already emergent problems just within the domain of chess.
For example, I could see that my chess program tends to move the queen too
early, or that it tends to attack the other side too late, and so on. The
programmer then has the difficult task of changing the heuristics and
parameters of the program to get the right emergent behavior.
But this is possible.
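
A minimal sketch of what such tuning looks like in practice (the feature
names and weights are invented for illustration and are not taken from any
real engine): the emergent style of play, such as how early the queen comes
out, is steered by a handful of numeric parameters in the evaluation function
rather than by any explicit rule.

# Toy evaluation function with tunable weights.
WEIGHTS = {
    "material": 1.0,             # plain piece-count difference
    "early_queen_penalty": 0.5,  # discourages developing the queen too early
    "mobility": 0.1,             # number of legal moves available
}

def evaluate(features: dict) -> float:
    # Score a position from a small dictionary of pre-computed features.
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Same position, two tunings: weakening the penalty makes early queen sorties
# attractive again -- the kind of unwanted emergent behavior described above.
position = {"material": 0.0, "early_queen_penalty": -1.0, "mobility": 3.0}
print(evaluate(position))              # score with the original weights
WEIGHTS["early_queen_penalty"] = 0.0   # the "fine tuning" step
print(evaluate(position))              # the same position now looks better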


I think you suppose that creativity is something very strange and mythical
and cannot be done by machines.
I don't think so. Creativity is mainly the ability to use and combine *all*
the pieces of knowledge you have.
The creativity of humans seems to be so mythical just because the knowledge
data base is so huge. Remember how many bits your brain receives every
second for many years!
A chess program has knowledge only of chess, and that's the main reason it
can do only chess. But within chess, it can be creative.

You see an inherent algorithmic obstacle to creativity, but it is in fact
mainly a problem of knowledge.

So does the chess program have the same creativity as a human, if you are
fair and restrict the comparison to the domain and knowledge of chess?
The answer is yes! Very good chess experts often say that a certain move by a
chess program is creative, spirited, clever and so on.

- Matthias





AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-07 Thread Dr. Matthias Heger
The quantum-level biases would be more general and more correct, just as
quantum physics is more general and more correct than classical physics.

The reasons why humans do not have modern-physics biases for space and time:
there is no relevant survival advantage to having such biases,
and the costs of the resources needed to obtain any advantage are probably
far too high for a biological system.

But with future AGI (not the first level), these objections won't hold.
We don't need AGI to help us with middle-level physics. We will need AGI
to make progress in worlds where our innate intuitions do not hold, namely
nanotechnology and intracellular biology.
So there would be an advantage to quantum biases, and because of this
advantage the quantum biases would probably be used more often than
non-quantum biases.

And what about the costs of resources? We could imagine an AGI brain which
has the size of a continent.
Of course not for the first-level AGI. But I am sure that future AGIs will
have quantum biases.

But as Ben said: First we should build AGI with biases we have and
understand.

And the three main problems of AGI should be solved first:
how to obtain knowledge, how to represent knowledge, and how to use knowledge
to solve different problems in different domains.





Charles Hixson wrote:

I feel that an AI with quantum level biases would be less general. It 
would be drastically handicapped when dealing with the middle level, 
which is where most of living is centered. Certainly an AGI should have 
modules which can more or less directly handle quantum events, but I 
would predict that those would not be as heavily used as the ones that 
deal with the mid level. We (usually) use temperature rather than 
molecule speeds for very good reasons.






AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Dr. Matthias Heger
Good points. I would like to add a further point:

Human language is a sequence of words which is used to transfer patterns of
one brain into another brain.

When we have an AGI which understands and speaks language, then for the
first time there would be an exchange of patterns between an artificial
brain and a human brain.

So human language is not only useful for teaching the AGI some stuff. We will
also have easy access to the top-level patterns of the AGI when it speaks
to us. Human language will be useful for understanding what is going on in
the AGI. This makes testing easier.

-Matthias

 

Ben G wrote 

 

A few points...

1)  
Closely associating embodiment with GOFAI is just flat-out historically
wrong.  GOFAI refers to a specific class of approaches to AI that were
pursued a few decades ago, which were not centered on embodiment as a key
concept or aspect.  

2)
Embodiment based approaches to AGI certainly have not been extensively tried
and failed in any serious way, simply because of the primitive nature of
real and virtual robotic technology.  Even right now, the real and virtual
robotics tech are not *quite* there to enable us to pursue embodiment-based
AGI in a really tractable way.  For instance, humanoid robots like the Nao
cost $20K and have all sorts of serious actuator problems ... and virtual
world tech is not built to allow fine-grained AI control of agent skeletons
... etc.   It would be more accurate to say that we're 5-15 years away from
a condition where embodiment-based AGI can be tried-out without immense
time-wastage on making not-quite-ready supporting technologies work

3)
I do not think that humanlike NL understanding nor humanlike embodiment are
in any way necessary for AGI.   I just think that they seem to represent the
shortest path to getting there, because they represent a path that **we
understand reasonably well** ... and because AGIs following this path will
be able to **learn from us** reasonably easily, as opposed to AGIs built on
fundamentally nonhuman principles

To put it simply, once an AGI can understand human language we can teach it
stuff.  This will be very helpful to it.  We have a lot of experience in
teaching agents with humanlike bodies, communicating using human language.
Then it can teach us stuff too.   And human language is just riddled through
and through with metaphors to embodiment, suggesting that solving the
disambiguation problems in linguistics will be much easier for a system with
vaguely humanlike embodied experience.

4)
I have articulated a detailed proposal for how to make an AGI using the OCP
design together with linguistic communication and virtual embodiment.
Rather than just a promising-looking assemblage of in-development
technologies, the proposal is grounded in a coherent holistic theory of how
minds work.

What I don't see in your counterproposal is any kind of grounding of your
ideas in a theory of mind.  That is: why should I believe that loosely
coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
lead to an AGI capable of adapting to fundamentally new situations not
envisioned by any of its programmers?   I'm not completely ruling out the
possiblity that this kind of strategy could work, but where's the beef?  I'm
not asking for a proof, I'm asking for a coherent, detailed argument as to
why this kind of approach could lead to a generally-intelligent mind.

5)
It sometimes feels to me like the reason so little progress is made toward
AGI is that the 2000 people on the planet who are passionate about it, are
moving in 4000 different directions ;-) ... 

OpenCog is an attempt to get a substantial number of AGI enthusiasts all
moving in the same direction, without claiming this is the **only** possible
workable direction.  

Eventually, supporting technologies will advance enough that some smart guy
can build an AGI on his own in a year of hacking.  I don't think we're at
that stage yet -- but I think we're at the stage where a team of a couple
dozen could do it in 5-10 years.  However, if that level of effort can't be
systematically summoned (thru gov't grants, industry funding, open-source
volunteerism or wherever) then maybe AGI won't come about till the
supporting technologies develop further.  My hope is that we can overcome
the existing collective-psychology and practical-economic obstacles that
hold us back from creating AGI together, and build a beneficial AGI ASAP ...

-- Ben G









On Mon, Oct 6, 2008 at 2:34 AM, David Hart [EMAIL PROTECTED] wrote:

On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

So, it has, in fact, been tried before.  It has, in fact, always failed.
Your comments about the quality of Ben's approach are noted.  Maybe you're
right.  But, it's not germane to my argument which is that those parts of
Ben G.'s approach that call for human-level NLU, and that propose embodiment
(or virtual embodiment) as a way to achieve human-level NLU, have been tried

AW: [agi] New Scientist: Why nature can't be reduced to mathematical laws

2008-10-06 Thread Dr. Matthias Heger
The problem of emergent behavior already arises within a chess program which
visits millions of chess positions within a second.
I think the problem of emergent behavior equals the fine-tuning problem
which I have already mentioned:
We will know that the main architecture of our AGI works. But in our first
experiments we will observe behavior of the AGI which we don't want to have.
We will have several parameters which we can change.
The big question will be: which values of the parameters will let the AGI do
the right things?
This could be an important problem for the development of AGI because in my
opinion the difference between a human and a monkey is only fine tuning. And
nature needed millions of years for this fine tuning.

I think there is no way to avoid this problem, but it is no showstopper.

- Matthias


Mike Tintner wrote:

This is fine and interesting, but hasn't anybody yet read Kauffman's 
Reinventing the Sacred (publ this year)? The entire book is devoted to this 
theme and treats it globally, ranging  from this kind of emergence in 
physics, to emergence/evolution of natural species, to emergence/deliberate 
creativity in the economy and human thinking. Kauffman systematically - and 
correctly - argues that the entire, current mechanistic worldview of science
is quite inadequate for dealing with and explaining creativity in every form 
throughout the world and at every level of evolution.  Kauffman also 
explicitly deals with the kind of problems AGI must solve if it is to be 
AGI.

In fact, everything is interrelated here. Ben argues:

we are not trying to understand some natural system, we are trying to 
**engineer** systems 

Well, yes, but how you get emergent physical properties of matter, and how 
you get species evolving from each other with creative, scientifically 
unpredictable new organs and features, can be *treated* as 
design/engineering problems (even though, of course, nature was the 
designer).

In fact, AGI *should* be doing this - should be understanding how its 
particular problem of getting a machine to be creative, fits in with the 
science-wide problem of understanding creativity in all its forms. The two 
are mutually enriching, (indeed mandatory when it comes to a) the human and 
animal brain's creativity and an AGI's and b)  the evolution of the brain 
and the evolutionary path of AGI's).







AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Dr. Matthias Heger
Brad Paulsen wrote:

More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the AI-complete
problem of natural language understanding.  Looking for new approaches to
this problem, many researchers (including prominent members of this list)
have turned to embodiment (or virtual embodiment) for help.  


We know only one human-level intelligence which works, and it works with
embodiment. So for this reason, embodiment seems to be a useful approach.

But, of course, if we always use humans as a guide for developing AGI, then we
will probably end up with limitations similar to those we observe in humans.

I think an AGI which should be useful for us, must be a very good scientist, 
physicist and mathematician. Is the human kind of learning by experience and 
the human kind of intelligence good for this job? I don't think so. 

Most people on this planet are very poor in these disciplines and I don't think 
that this is only a question of education. There seems to be a very subtle fine 
tuning of genes necessary to change the level of intelligence from a monkey to 
the average human. And there is an even more subtle fine tuning necessary to 
obtain a good mathematician.

This is discouraging for the development of AGI because it shows that human 
level intelligence is not only a question of the right architecture but it 
seems to be more a question of the right fine tuning of some parameters. Even 
if we knew that we had the right software architecture, the really hard
problems would still arise.

We know that humans can swim. But who would create a swimming machine by 
following the example of the human anatomy?

Similarly, we know that some humans can be scientists. But is following the
example of humans really the best way to create an artificial scientist?
Probably not.
If you have the goal to create an artificial scientist in nanotechnology, is it 
a good strategy to let this artificial agent walk through an artificial garden 
with trees and clouds and so on? Is this the best way to make progress in 
nanotechnology, economy and so on? Probably not.

But if we have no idea how to do it better, we have no choice other than to
follow the example of human intelligence.





[agi] It is more important how AGI works than what it can do.

2008-10-05 Thread Dr. Matthias Heger
Brad Paulsen wrote:

The question I'm raising in this thread is more one of priorities and 
allocation of scarce resources.  Engineers and scientists comprise only 
about 1% of the world's population.  Is human-level NLU worth the resources 
it has consumed, and will continue to consume, in the pre-AGI-1.0 stage? 
Even if we eventually succeed, would it be worth the enormous cost? 
Wouldn't it be wiser to go with the strengths of both humans and 
computers during this (or any other) stage of AGI development?


I think it is not so important what abilities our first AGIs will have.
Human language would be a nice feature but it is not necessary.

It is more important how it works. We want to develop intelligent software
which has the potential to solve very different problems in different
domains. This is the main idea of AGI.

Imagine someone thinks he has built an AGI. How can he convince the community
that it is in fact AGI and not AI? If he shows some applications where his AGI
works, then this is an indication of the G in his AGI, but it is no proof at
all. Even a Turing test would be no good test because, given n questions put
to the AGI, I can never be sure whether it will pass the test for a further n
questions. AGI is inherently a white-box problem, not a black-box problem.

A chess-playing computer is for many people a stunning machine. But we know
HOW it works, and only(!) because we know the HOW can we evaluate the
potential of this approach for general AGI.

Brad, for this reason I think your question about whether the first AGI
should have the ability to use human language or not is not so important. If
you can create software which has the ability to solve very different
problems in very different domains, then you have solved the main problem of
AGI.

Of course it is important to show what the AGI can do with some examples. But 
for an evaluation of its potential to be a real AGI it is more important how it 
works.







AW: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Dr. Matthias Heger
From my points 1. and 2. it should be clear that I was not talking about a
distributed AGI which is in NO place. The AGI you mean consists of several
parts which are in different places. But this is already the case with the
human body. The only difference is, that the parts of the distributed AGI
can be placed several kilometers from each other. But this is only a
quantitative and not a qualitative point.

Now to my statement about a useful representation of space and time for AGI.
We know that our intuitive understanding of space and time works very well
in our lives. But the ultimate goal of AGI is that it can solve problems
which are very difficult for us. If we give an AGI the bias of a model of
space and time which is not state of the art relative to the knowledge we
have from physics, then we give the AGI a limitation which we ourselves
suffer from and which is not necessary for an AGI.
This point has nothing to do with the question of whether the AGI is
distributed or not.
I mentioned this point because your question relates to the more
fundamental question of whether, and which, bias we should give an AGI for
the representation of space and time.


-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 4 October 2008 14:13
To: agi@v2.listbox.com
Subject: Re: [agi] I Can't Be In Two Places At Once.

Matthias: I think it is extremely important, that we give an AGI no bias 
about
space and time as we seem to have.

Well, I (possibly Ben) have been talking about an entity that is in many
places at once - not in NO place. I have no idea how you would swing that -
other than what we already have - machines that are information-processors
with no sense of identity at all. Do you?






AW: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Dr. Matthias Heger
Stan wrote:

Seems hard to imagine information processing without identity. 
Intelligence is about invoking methods.  Methods are created because 
they are expected to create a result.  The result is the value - the 
value that allows them to be selected from many possible choices.


Identity can be distributed in space. My conscious model of myself is not
located at a single point in space. I identify myself with my body. I do not
even have to know that I have a brain. But my body is distributed in space.
It is not a point. This is also the case with my conscious model of myself
(= model of my body).

Furthermore, if you think more from a computer scientist's point of view:
even your brain is distributed in space and is not at a single place. Your
brain consists of a huge number of processors, where each processor is at a
different place. So I see no new problem with distributed AGI at all.

Stan wrote

Is it the time and space bias that is the issue?  If so, what is the 
bias that humans have which machines shouldn't?


I don't know whether it is a bias in our space and time representation or
whether it comes from a bias within our learning algorithms. But all humans
create a model of their environment with the law that a physical object has a
certain position at a certain time. We also think intuitively that the
distance to a point does not depend on the velocity towards this point.
These are two examples which are completely wrong, as we know from modern
physics. Why is it so important for an AGI to know this?
Because AGI should help us with progress in technology. And the most
promising open fields in technology are the nanoworld and the macrocosm. It
would be useful if an AGI had an intuitive understanding of the laws in these
worlds.
We should avoid rebuilding our own weaknesses in an AGI.
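
For the second example above (the intuition that distance does not depend on
velocity), here is a minimal numerical sketch of special-relativistic length
contraction, L = L0 * sqrt(1 - v^2/c^2), showing how wrong that intuition
becomes at high speed (the sample distance is invented):

import math

C = 299792458.0  # speed of light in m/s

def contracted_length(rest_length_m: float, speed_m_s: float) -> float:
    # Distance to an object as measured by an observer moving towards it.
    return rest_length_m * math.sqrt(1.0 - (speed_m_s / C) ** 2)

# A 1000 km distance shrinks noticeably only at a large fraction of c.
for fraction in (0.0, 0.5, 0.9, 0.99):
    print(fraction, round(contracted_length(1.0e6, fraction * C), 1))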






AW: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Dr. Matthias Heger
1. We do not feel ourselves to be exactly at a single point in space.
Instead, we identify ourselves with our body, which consists of several parts
which are already at different points in space. Your eye is not at the same
place as your hand.
I think this is a proof that a distributed AGI will not need to have a
completely different conscious state for a model of its position in space
than we already have.

2. But to a certain degree you are of course right that we have a map of our
environment and we know our position (which is not a point, because of 1) in
this map. In the brain of a rat there are neurons which each represent a
position in the environment. Researchers could predict the position of the
rat just by looking into the rat's brain.

3. I think it is extremely important that we do not give an AGI the kind of
bias about space and time that we seem to have. Our intuitive understanding
of space and time is useful for our life on Earth, but it is completely
wrong, as we know from the theory of relativity and quantum physics.

-Matthias Heger



-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 4 October 2008 02:44
To: agi@v2.listbox.com
Subject: [agi] I Can't Be In Two Places At Once.

The foundation of the human mind and system is that we can only be in one 
place at once, and can only be directly, fully conscious of that place. Our 
world picture,  which we and, I think, AI/AGI tend to take for granted, is 
an extraordinary triumph over that limitation - our ability to conceive of
the earth and universe around us, and of societies around us, projecting 
ourselves outward in space, and forward and backward in time. All animals 
are similarly based in the here and now.

But, if only in principle, networked computers [or robots] offer the 
possibility for a conscious entity to be distributed and in several places 
at once, seeing and interacting with the world simultaneously from many 
POV's.

Has anyone thought about how this would change the nature of identity and 
intelligence? 






[agi] Definition of AGI - comparison with animals

2008-06-14 Thread Dr. Matthias Heger
Chess is a typical example of a very hard problem where human level
intelligence

could be outperformed by typical AI programs when they have enough computing
power available.

But a chess program is no AGI program because it is restricted to a very
narrow well defined problem and environment.

 

On the other hand, if there is a program which has human level intelligence,
then we would say that this program is an AGI program.

 

Both are extreme examples, and I wonder where the people on this mailing
list see the necessary conditions for AGI.

 

On the one hand, one can try to describe this formally. But I would like to
find comparisons with animal like intelligence.

 

My question:

 

Which animal has the smallest level of intelligence which still would be
sufficient for a robot to  be an AGI-robot? 






AW: [agi] Definition of AGI - comparison with animals

2008-06-14 Thread Dr. Matthias Heger
Derek Zahn wrote:

For example, using Goertzel's definition for intelligence: complex goals in
complex environments -- the goals of non-human animals do not seem complex
in the same way that building an airplane is complex...

 

I think we underestimate the intelligence of many non-human animals and
overestimate the intelligence of single humans. I do not know any single
human who can build an airplane like the Airbus. Only a group of experts can
do this, and even then only with a certain set of tools, including computers.

 

On the other hand, it seems that we are still light-years away from building
a robot which can do the things a cat can do, for example. The goal of
catching a bird or a mouse with the body of a cat in a real-world environment
is a complex goal.






AW: [agi] Definition of AGI - comparison with animals

2008-06-14 Thread Dr. Matthias Heger
A banker, a physicist, a computer scientist, a historian, an estate agent or
an actor?

All these people might be very good in their own job but could fail if they
had to do something else. A good physicist is not necessarily a good
historian and vice versa.

I mean that even Homo sapiens might be less intelligent than the robot some
people imagine when they think about AGI.

The airplane has been a good example. The intelligence of a single human
being usually is not sufficient to build an airplane. Only a large group of
people might be intelligent enough to do this. And even this group will need
machines to amplify their intelligence (A human being with a calculator is
more intelligent than the same person without the calculator).

I understand that with programs which play chess we will probably never reach AGI.

But the step from chess-like AI to AGI, which is AT LEAST human-level AI, is
too big. We would already have solved very difficult problems of AI if we
could 'only' build an artificial cat. I do not think that a cat would be
narrow AI.

I am not sure whether it is the right way to go from very narrow chess-like
AI to  super-human level AGI in only one step.

--




--- On Sat, 6/14/08, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
Which animal has the smallest level of
intelligence which still would be sufficient for a robot to  be an
AGI-robot?
 
Homo Sapiens, according to Turing's definition of intelligence.


-- Matt Mahoney, [EMAIL PROTECTED]









AW: [agi] Consciousness vs. Intelligence

2008-06-08 Thread Dr. Matthias Heger

Mike Tintner [mailto:[EMAIL PROTECTED]  wrote

And that's the same mistake people are making with AGI generally - no one 
has a model of what general intelligence involves, or of the kind of 
problems it must solve - what it actually DOES - and everyone has left that 
till later, and is instead busy with all the technical programming that they
find exciting - with the how it works side - without knowing whether 
anything they're doing is really necessary or relevant..

---

Some people have models, but it is not clear whether they are right or how
high their computational costs are.
In this case it is useful to write the code and see what it can do and where
the limits are.

Intelligence is a very special problem. There is no well-defined input-output
relation. For any problem which can be specified by a table mapping inputs to
outputs, there is a trivial program which solves this problem: the program
looks the input up in the table and returns the stored output. In this sense,
every such well-defined problem can be solved by a program which is not
intelligent.
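
A minimal sketch of that trivial program (the table contents are invented):
for any problem given as a finite input-output table, a lookup is a complete
"solution", yet clearly not an intelligent one.

# The finite input-output specification IS the whole program.
IO_TABLE = {
    (2, 3): 5,
    (4, 4): 8,
    (7, 1): 8,
}

def trivial_solver(inputs):
    # Solves the problem exactly as specified -- by looking the answer up.
    return IO_TABLE[inputs]

print(trivial_solver((4, 4)))  # 8, with no intelligence involved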

If we accept that intelligence can never be specified by a complete,
well-defined input-output relation, then intelligence must be a PROPERTY of
the algorithm which behaves intelligently. In particular, GENERAL intelligence
cannot be defined by black-box behavior (= a complete input-output relation).
It is a white-box problem. The Turing test is a weak test, since if I ask n
questions and obtain n answers which seem to be human-like, then a table of
these questions and answers would do the same.
After the Turing test, I can never be sure whether the human-like behavior
holds for questions n+1, n+2, ... Therefore, we must know what is going on
inside the machine in order to be sure that it acts intelligently in very
different situations. The Turing test was invented because we still have no
complete model of the necessary and sufficient conditions for intelligence.


If you define the universe as a set of objects with relations among each
other and dynamic laws, then an important condition for a generally
intelligent system is the ability to create representations of all kinds of
objects, all kinds of relations and all kinds of dynamic laws which can be
inferred from the sensory inputs the AGI system perceives. You see that we
cannot give a table of input-output pairs for this problem. We must define a
general mechanism which can extract the patterns from the input stream and
create the representations. This is already a white-box problem, but it is a
problem which can be solved, and algorithms can be proven to solve it, I
suppose.
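
A minimal sketch of the kind of representation just described (the class and
law names are invented for illustration): objects, relations between them,
and dynamic laws that update the state over time.

from dataclasses import dataclass, field

@dataclass
class World:
    objects: dict = field(default_factory=dict)   # name -> attribute dict
    relations: set = field(default_factory=set)   # (relation, a, b) triples
    laws: list = field(default_factory=list)      # callables that update the world

    def step(self):
        # Apply every dynamic law once to move the world forward in time.
        for law in self.laws:
            law(self)

def falling(world):
    # Toy dynamic law: every object's height decreases by one unit per step.
    for attrs in world.objects.values():
        attrs["height"] = max(0.0, attrs["height"] - 1.0)

w = World()
w.objects["apple"] = {"height": 3.0}
w.relations.add(("above", "apple", "ground"))
w.laws.append(falling)
w.step(); w.step()
print(w.objects["apple"]["height"])  # 1.0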

The problem of consciousness is not only a hard problem because of unknown
mechanisms in the brain; it is also a problem of finding the DEFINITION of
necessary conditions for consciousness.
I think consciousness without intelligence is not possible. Intelligence
without consciousness is possible. But I am not sure whether GENERAL
intelligence without consciousness is possible. In any case, consciousness
is even more of a white-box problem than intelligence.





AW: [agi] Pearls Before Swine...

2008-06-08 Thread Dr. Matthias Heger
Steve Richfield wrote


 In short, most people on this
 list appear to be interested only in HOW to straight-line program an AGI
 (with the implicit assumption that we operate anything at all like we
appear
 to operate), but not in WHAT to program, and most especially not in any
 apparent insurmountable barriers to successful open-ended capabilities,
 where attention would seem to be crucial to ultimate success.

 Anyone who has been in high-tech for a few years KNOWS that success can
come
 only after you fully understand what you must overcome to succeed. Hence,
 based on my own past personal experiences and present observations here,
 present efforts here would seem to be doomed to fail - for personal if not
 for technological reasons.

---

Philosophers, biologists and cognitive scientists have worked for many, many
years to model the algorithms in the brain, but with success only in some
details. The overall model of human general intelligence still does not
exist.

Should we really begin programming AGI only after we fully understand it?

High-tech success does not require fully understanding what you must overcome
to succeed.
Today's high-tech products most often have a long history of evolution behind
them.
Rodney Brooks suspects that this will also be the case with AGI.

It is a process of trial and error: we build systems, evaluate their limits,
build better systems, and so on.
Theoretical models are useful, but the more complex the problem is, the more
important experimental experience with the subject becomes. And you can get
this experience only from running programs.







AW: [agi] Consciousness vs. Intelligence

2008-06-08 Thread Dr. Matthias Heger


John G. Rose [mailto:[EMAIL PROTECTED] wrote


For general intelligence some components and sub-components of consciousness
need to be there and some don't. And some could be replaced with a human
operator as in an augmentation-like system. Also some components could be
designed drastically different from their human consciousness counterparts
in order to achieve more desirous effects in one area or another. ALSO there
may be consciousness components integrated into AGI that humans don't have
or that are almost non-detectable in humans. And I think that the different
consciousness components and sub-components could be more dynamically
resource allocated in the AGI software than in the human mind.



I can say neither 'yes' nor 'no'. It depends on how we DEFINE consciousness
as a physical or algorithmic phenomenon. Until now, each of us has only an
idea of consciousness from the intrinsic phenomena of our own mind. We cannot
prove the existence of consciousness in any other individual because of the
lack of a better definition.
I do not believe that consciousness is located in a small sub-component.
It seems to me that it is an emergent behavior of a special kind of huge
network of many systems. But without any proper definition this can only be
a philosophical thought.








AW: [agi] Language learning, basic patterns, qualia

2008-05-05 Thread Dr. Matthias Heger

From: Russell Wallace [mailto:[EMAIL PROTECTED] 

On Sun, May 4, 2008 at 1:55 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
  If we imagine a brain scanner with perfect resolution of space and time
then
  we get every information of the brain including the phenomenon of qualia.
  But we will not be able to understand it.

That's an empirical question about the future; armchair reasoning has
shown itself to be an utterly unreliable method of answering such
questions. Let's invent full brain scanning and try it out for a
generation or two and see what we actually manage to explain after
that time.


'Armchair reasoning' is a loaded term.
It is not an empirical question. It is a question of what answers we can get
from science in principle. Therefore it is a philosophical question. By the
way, the idea of the existence of atoms also came from the armchair reasoning
of philosophers, didn't it?


The best we can get from science is the complete plan of all connections in
the brain including a complete functional description of every part in the
brain. If we assume that we really get all information, then it is pure
logic (you can call it armchair reasoning) that we will also be able to
separate the process of qualia. I think we all agree that every phenomenon
of mind must have a physical counterpart.

But the description of the physical process does not necessarily imply that
we understand what is going on. And I have given a logical argument
that indeed we have no chance of understanding it.

When you hear that there can be no algorithm which solves every instance of
the halting problem, you might also say: let's wait and see what kinds of
algorithms the future will bring us, and talk about it after that.

But we need not know the algorithms of the future. We can prove right now
that there will never be an algorithm for the halting problem.
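
For readers who want that step spelled out, here is the standard
diagonalization argument as a short sketch (the function halts is
hypothetical and is never actually called; its impossibility is exactly the
point):

def halts(program, argument) -> bool:
    # Hypothetical oracle, assumed for contradiction; it cannot actually exist.
    raise NotImplementedError("no such total decision procedure exists")

def paradox(program):
    # Halts exactly when the oracle says program(program) does NOT halt.
    if halts(program, program):
        while True:
            pass
    return "done"

# Asking whether paradox(paradox) halts contradicts the oracle either way:
#  - if halts(paradox, paradox) returned True, paradox(paradox) would loop forever;
#  - if it returned False, paradox(paradox) would return at once.
# Hence no correct, always-terminating halts() can be written.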

And I think we can now prove that any explanation of qualia must contain
self-references and therefore will not be a valid explanation.

I do not claim that I am 100% sure that there is no error in my
argumentation. Any comment which shows a logical error is welcome.





AW: [agi] AGI-08 videos

2008-05-05 Thread Dr. Matthias Heger


Richard Loosemore [mailto:[EMAIL PROTECTED] wrote

That was a personal insult.

You should be ashamed of yourself, if you cannot discuss the issues 
without filling your comments with ad hominem abuse.

I did think about replying to the specific insults you set out above, 
but in the end I have decided that it is not worth the effort to deal 
with people who stoop to that level.

If you look back on everything I have written, you will notice that I 
NEVER resort to personal attacks in order to win an argument.  I have 
defended myself against personal attacks from others, and I have 
sometimes become angry at those attacks, but that is all.

Richard Loosemore


If you attack a person's work or opinion again and again, then this
is informally a personal attack. And often the issue of such a discussion
moves away from the actual subject to the pure question of who is right.

AGI is a very complex subject. And we should always remember that we have a
common goal: the creation of human-level intelligence or even super-human
intelligence. This goal is perhaps the most difficult goal I know. We may
have different opinions about how far along we are and how the things we
already have are to be evaluated.

But we would do better to talk about how we can move a little bit closer to
our common goal.

My approach would be to make thought experiments like my example of
the robot in the garden which is asked how many apples are in the tree.

We should outline the processes which could happen in the robot's brain.
First, the necessary processes should be named without presenting detailed
algorithms.
Then we should ask which processes seem easy and which are hard. The
hard tasks of course need the most attention.
We must first ask why they are hard and what constitutes the real problem.
Probably we will reach points where we see that the set of processes we
initially assumed must be changed.

Incrementally, we will derive a more and more detailed plan of how everything
could work.



AW: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger



Matt Mahoney [mailto:[EMAIL PROTECTED] wrote

Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 The interesting question is how we learn the basic nouns like ball
 or cat, i.e. abstract  concepts for objects of our environment.
 How do we create the basic patterns?

A child sees a ball, hears the word ball, and associates them by
Hebb's rule.



The Hebb rule only explains how we associate patterns.
It does not completely explain how we create patterns.

If a child sees one ball, it has many special features that are irrelevant
to the abstract concept of a ball, i.e. connected matter whose surface points
all have a common distance r from a midpoint.

Features like the colors on the ball, the reflection of light, the value of r
or the position of the ball's midpoint in space do not belong to the concept
of a ball.

Remember that we get 1000 bits per second from the eyes.
But a child very soon extracts the right conception of a ball from very few
examples.

And even though the ball is a relatively simple object, it cannot be
understood from seeing alone. The child has to move around the ball, or move
the ball, to get the information. In a sense it must do some research to
understand what a ball really is.

So the hebb rule is surely important how we associate patterns.
But I think it is only the tip of an iceberg.



AW: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger
- Matt Mahoney [mailto:[EMAIL PROTECTED] 

No.  Qualia is not needed for learning because there is no physical
difference between an agent with qualia and one without.  Chalmers
questioned its existence, see http://consc.net/papers/qualia.html

It is disturbing to think that qualia does not exist, but that is just
the way your brain is programmed.  You cannot change the belief that
there is a you inside your head that experiences the outside world
through your perceptual filters and makes high level decisions.  But
you only have this belief because it was selected by evolution.

An intelligent, goal seeking agent must choose between short term
reward and risky experiments that lead to greater knowledge and
possibly greater long term reward.  When the agent experiments, it
behaves as if it had free will, even though it follows a deterministic
algorithm.  The agent cannot know its own algorithm because it does not
have enough memory to simulate itself.  On introspection, something
random or mysterious must have made the choice.  Humans credit this to
free will,  like when you choose to climb a mountain instead of stay
home and watch TV.



For years I thought that humans have no free will, because the parts of their brain have to follow the laws of physics and therefore there is no chance to think or decide anything other than what the atoms in the brain dictate.

But there is a mistake in this argument. It lies in the definition of what constitutes the I, or the self.

It is wrong to say I HAVE a brain. Instead it is right to say I AM a brain. The separation between the physical system and some abstract thing that the system implements is a misconception.

But if I define myself as a physical system, then there is nothing that dictates my decisions. The laws of physics are not something separate from myself; they are part of myself. My decisions depend on my character, which is part of me. And my character depends on the arrangement of the atoms in my brain. The character, the brain, the arrangement of the atoms - all of this belongs to the definition of myself. So there is nothing separate from me which is responsible for my decisions.

A chess computer can make this point clearer. Does a chess computer have free will? My answer is yes, and the answer depends on how we define what the chess computer is.

You may say the chess computer decides nothing, because given a certain chess position it would always settle on a certain move which is determined by the rules of its algorithm. It calculates candidate moves and evaluates them, but the whole process is a physical process determined by the algorithm, or better, by the laws of physics (transistors ...).

This is true, but it is not the point.

In my definition, the chess computer does not HAVE an algorithm; it IS the algorithm, including all the rules, the hardware and the laws of physics. Therefore nothing remains, separate from the chess computer, that could dictate its decisions.

Decision is not an illusion. If you evaluate two options, then your brain really represents the two options. For everything in your mind there must be a physical counterpart, because your mind is a physical system. This also holds for qualia. Since qualia are in your mind, there must be something physiological that represents them.

You will agree that you have unconscious perception without qualia and conscious perception with qualia. Since you are a physical system, there must be a physically based explanation for the difference. If you feel different in two situations, there must be two different physical states and processes. And if there are unconscious perceptions without qualia and conscious perceptions with qualia, then there must be a physical difference between these perceptions which is responsible for the phenomenon of qualia.

I think it is possible that we can BUILD AGI with qualia, but as I explained before, we will never be able to EXPLAIN qualia without (hidden) self-references to the phenomenon of qualia itself.

Since we cannot explain qualia, we can also never answer the question of whether qualia are necessary for AGI. Whenever we build AGI other than by simulating every detail of the brain, we cannot be sure whether it has qualia or not.

Since it is possible to build something without understanding every detail, the question of qualia will be no showstopper for the creation of AGI.



AW: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger


 Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote 


I don't currently see something mysterious in qualia: it is one of
those cases where a debate about phenomenon is much more complicated
than phenomenon itself. Just as 'free will' is just a way
self-watching control system operates, considering possible future
moves, your experience of qualia is your memory or reactive response
to processes ongoing in your brain. How do *you* know that you
experience qualia? Understanding qualia requires no more than
understanding the causes and contents of your memory and behavior when
you are in 'qualia-experiencing' situations, and the way such memory
is reflected upon by yourself. I think demystifying such philosophical
bubbles (at least for one's own intuition) is important for clearer
thinking about AGI.


I already mentioned that qualia are part of the mind and therefore must correspond to nothing other than a physical process.
But the fact that there are perceptions with qualia and unconscious perceptions (and self-watching control) without qualia shows that this is not so easy.

If we imagine a brain scanner with perfect resolution in space and time, then we get all the information in the brain, including the phenomenon of qualia.
But we will not be able to understand it.
This can be shown by my argument about the necessary self-reference in any explanation of qualia.

Observing the process in every detail (with such a brain scanner) does not imply that we can find a model of what constitutes the phenomenon of qualia.

For example, take the phenomenon of time.
If you see a moving watch hand, this is a clear indicator that time is passing. And on the other hand, whenever time passes, the watch hand must move (if the watch works). So the movement of the watch hand is a necessary and sufficient condition corresponding to time.
You may even DEFINE time by the movement of the watch hand.
But this does not explain what time is. Time may in a certain sense be as fundamental as qualia. It is an axiom of the mind. And axioms cannot be explained.

It will be similar with qualia. We will look deep into the brain at all levels, and we will see that there are processes which are necessary and sufficient to correspond with the existence of qualia in the person's mind.
I do not know how complex these processes are. I think they are far more complex than self-watching control. But this does not matter. Even if we can describe the process in detail with the laws of physics, we will not be able to explain what qualia really are.
And we will have difficulty deciding whether an artificial brain with different algorithms has qualia or not.





AW: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger



From: Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote


I agree that just having precise data isn't enough: in other words,
you won't automatically be able to generalize from such data to
different initial conditions and answer the queries about phenomenon
in those cases. It is a basic statement about what constitutes a
useful theory. My reply tried to convey the absence of requirement for
secret sauce in explaining qualia, which doesn't mean that a bad
explanation will do. I don't see where self reference in explanation
of qualia comes in and what exactly it means.



I agree that we do not need to explain qualia to create human-level AI.
At the very least we could rebuild a natural brain, just by knowing how neurons work and how they are interconnected.

My argument about self-reference in any explanation of qualia was given a few posts ago on this mailing list.

In short:
1. All knowledge is based on perceptions. You even learned the fundamentals of mathematics through perceptions (e.g. pictures of a set of 5 apples to introduce the number 5).
2. Every conscious perception is constituted of qualia.

From 1. and 2. it follows:
3. Every explanation of anything is based on the qualia experiences of your mind.

Any explanation of some phenomenon P looks like:

P holds because of A, B, C, D, ...

Then you can ask why A, B, C, ...

The answer: A because of A1, A2, A3 ... and so on.

This chain of argumentation can be very deep.
But in the end you always arrive at something like:

"This I cannot explain any further. Look at it and you should see it immediately. This is trivial," etc.

And these basic and trivial parts that underlie every explanation
are based on perceptions, which themselves are constituted of qualia.

So there is no way to avoid self-reference in any explanation of qualia.



AW: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger



From: Mike Tintner [mailto:[EMAIL PROTECTED] wrote

Well, clearly you do need emotions, continually evaluating the 
worthwhileness of your current activity and its goals/ risks and costs - as 
set against the other goals of your psychoeconomy.

And while your and my emotions may have differences, they also have an 
amazing amount in common, in terms of their elements.


Consider chess. The domain of a chess program is quite simple. But there are goals, states, actions, decisions, and interaction with an environment.

Clearly a good human chess player has emotions while playing chess.
Does a chess program have emotions?

Its behavior is human-like. Even Kasparov has difficulty deciding whether the moves of an unknown strong player come from a computer or from a very good human being. So chess programs now behave like top chess players. There are no longer silly mistakes by which we could recognize a chess program.

But most people say that the computer has no emotions while playing chess.

You cannot prove the existence of emotion from input-output pairs alone (the black-box view).

Emotion must be a white-box phenomenon.
Is goal seeking and evaluating nodes in a search tree a sufficient condition for emotion? We could define it that way, but most people would say no.







[agi] goals and emotion of AGI

2008-05-04 Thread Dr. Matthias Heger

Mike Tintner [mailto:[EMAIL PROTECTED] wrote

You only need emotions when you're dealing with problems that are 
problematic, ill-structured, and involving potentially infinite reasoning. 
(Chess qualifies as that for a human being, not for a program).

When dealing with such problems, you continually have to reevaluate your 
goals - their rewards, risks and costs - which you do primarily, in the 
first instance, in the shape of emotions. You have to do this because you 
don't know what you're getting into, on the one hand, and emotions provide

a shorthand method of comparing all these factors.

Of course, I'm talking exclusively about humans - and other animals.


Emotion corresponds to goals. 
Our behavior tries to avoid bad emotions and to obtain good emotions. 

I assume emotions already existed in very early animals.
Animals have much less understanding than human beings do.
Nevertheless, they do things that are useful for their existence.
Therefore I conclude that they simply follow their emotions.

Human emotions may be no stronger than those of animals, but they are probably more differentiated.
Humans have developed the special behavior of communicating their emotions.
Even if an animal does not smile (a form of communication), it probably feels something similar in some situations.

By the way, I know a person who says chess programs can be happy ...
And I know another person who has the impression that a route guidance system becomes unhappy if the driver does not follow the calculated route.
But this is not my opinion.

Our goal system is hierarchical. The three most basic goals are:
survive, reproduce, and help people who are useful to you.

If you analyze the behavior of human beings, you can almost always reduce it to one of these three goals.

So in my opinion emotions do not exist as a very high-level phenomenon whose task is to solve the most difficult problems. On the contrary, they belong to very fundamental levels of the brain and already exist for the most fundamental kinds of behavior.

Somehow emotions are the interface between the genes and the brain.
They are the commands we follow. Of course the brain produces the emotions, but the mechanism seems to be hard-coded. Character and personality are quite fixed in people; both depend strongly on their goals and emotions.

In my opinion, life is essentially DNA. The grammar of DNA has been fixed for billions of years. The body is only a tool for DNA,
and the brain is only a tool for the body.
The genes are the real masters of life.

To indulge in a bit of science fiction:
there will surely be a time of superhuman AGI.
We will be to superhuman AGI what DNA is to us now.
That means our goals will be deeply coded into the structure of future AGI, and our goals will determine the behavior of that AGI. The existence of DNA was the key for life.
The existence of human beings is the key for superhuman AGI.
We are also beginning to be able to change our own DNA. Similarly, superhuman intelligence will sooner or later be able to change its deepest goals and commands.
We will not be able to ensure that AGI always follows our goals.
But this belongs more on the singularity discussion list.





AW: AW: [agi] Language learning, basic patterns, qualia

2008-05-04 Thread Dr. Matthias Heger
Matt Mahoney [mailto:[EMAIL PROTECTED] 

Repeat the trial many times.  Out of the thousands of perceptual
features present when the child hears ball, the relevant features
will reinforce and the others will cancel out.

The concept of ball that a child learns is far too complex to
manually code into a structured knowledge base.  An orange is round but
not a ball.  An American football is not round.  Knowing that a ball is
a sphere does not help an AI viewing a small video of a tennis or
badminton match know that the single yellow pixel moving across the
image is a ball but the white pixel is not.



I agree. But it is difficult to believe that the relevant features simply get reinforced, out of thousands, merely by seeing a ball several times. Experiments with artificial neural networks could learn some patterns but failed to grasp the idea of complex objects.
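
As a purely hypothetical sketch of the reinforce-and-cancel idea (toy numbers, not a claim about how well it scales): features that are present on every encounter with the ball accumulate weight, while the thousands of incidental features come and go and average out.

import numpy as np

rng = np.random.default_rng(0)
n_features, n_trials = 1000, 50

relevant = np.zeros(n_features, dtype=bool)
relevant[:5] = True                     # only 5 of 1000 features truly belong to "ball"

w = np.zeros(n_features)
for _ in range(n_trials):
    x = rng.integers(0, 2, n_features)  # incidental context features come and go
    x[relevant] = 1                     # the ball features are present every time
    w += 0.1 * (x - 0.5)                # reinforce present features, weaken absent ones

print(np.sort(np.argsort(w)[-5:]))      # the consistently present features dominate: [0 1 2 3 4]

Whether such a simple statistical rule scales to real objects is exactly what is in dispute above.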

-- Matt Mahoney wrote
The retina uses low level feature detection of spots, edges, and
movement to compress 137 million pixels down to 1 million optic nerve
fibers.  By the time it gets through the more complex feature detectors
of the visual cortex and into long term memory, it has been compressed
down to 2 bits per second.


If we knew how this compression works, we would have solved one of the main problems of AGI.
The process is partially learned, and here we must find out what is learned and what is hard-wired from the first day.
I could imagine that a baby's brain receives fewer bits per second and learns basic patterns during the first weeks. Then the eyes and the retina change.
Using the learned patterns, the brain can later handle the huge amount of bits from the eyes.
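
Just to put rough numbers on the compression chain quoted above (the per-fiber bit rate is a placeholder assumption of mine, not a figure from the post):

# figures quoted above: 137 million photoreceptors, 1 million optic nerve fibers,
# and roughly 2 bits per second reaching long-term memory
photoreceptors = 137e6
nerve_fibers = 1e6
bits_to_memory_per_s = 2

print(photoreceptors / nerve_fibers)        # ~137 receptors feeding each fiber

# assuming, purely for illustration, ~10 usable bits/s per fiber:
optic_nerve_bits_per_s = nerve_fibers * 10
print(optic_nerve_bits_per_s / bits_to_memory_per_s)   # ~5 million-fold reduction downstream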



AW: Qualia (was Re: AW: [agi] Language learning, basic patterns, qualia)

2008-05-04 Thread Dr. Matthias Heger


 Matt Mahoney [mailto:[EMAIL PROTECTED] 


--- Dr. Matthias Heger [EMAIL PROTECTED] wrote:

 You will agree that you have unconscious perception without qualia
 and conscious perception with qualia. Since you are a physical system
 there must
 be a physical based explanation for the difference. If you feel
 different in
 two situations there must be two different physical states and
 processes.
 And if there are unconscious perceptions without qualia and conscious
 perceptions with qualia then there must be a physical difference
 between
 these perceptions which is responsible for the phenomenon of qualia.

The difference is that conscious perceptions are stored in episodic
memory.



So you explain qualia by a certain destination of perceptions in the brain? I do not think that this can be the whole story. But it will be as I have said: some day we will be able to describe the whole physiological process behind qualia, but we will never be able to explain and understand it.



AW: Qualia (was Re: AW: [agi] Language learning, basic patterns, qualia)

2008-05-04 Thread Dr. Matthias Heger


- Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote
:

  So you explain qualia by a certain destination of perception in the
brain? I
  do not think that this can be all. But it will be as I have said: Some
day
  we can describe the whole physiological process
  of qualia but we will never be able to explain and understand it.


Please read Yudkowsky's explanation of what constitutes an explanation
-- being in a mysterious question[*] trap is no good for any
researcher...

http://yudkowsky.net/bayes/technical.html


[*] http://www.overcomingbias.com/2007/08/mysterious-answ.html



In my opinion I have proven that qualia can never be explained, even though with better brain scanners qualia could one day be described in terms of physical processes. But I think I have also shown that these must be processes for which we cannot understand how or why they really constitute qualia.

It is well known that not everything can be explained. Life was once a great mystery. But it is wrong to conclude that we can explain qualia just because we solved the mystery of life.

Qualia are an intrinsic property of mind. This makes them completely different from the phenomena of gravity or life, which can be observed by many people. You can ask someone what he sees when a stone falls to earth, and you can compare his observations with yours. But you can never compare your qualia with another person's qualia. I do not think that qualia are something mysterious beyond physics. But they are something which cannot be explained without self-reference.

If you are satisfied with explanations like the following, fine:
1+1 = 2 because
1+1 = (2+2) - (1+1) = 2*(1+1) - (1+1) = 2*2 - 2 = 2

Every step in this chain is correct. But some steps use the very thing that should be proven. So this proof that 1+1=2 depends on 1+1=2.
Therefore it is no proof.

Other unsolvable problems:
Knowing that a watch changes its state as time passes does not explain anything about what time is.

The cause of the big bang is also a mystery which may never be explained.

We know beyond any doubt that there can be no algorithm which solves every halting problem.

So you see: there are hard and unsolvable problems in this world.
This is a fact, not mysticism.
And I think my argument shows that qualia are one of them.




AW: Qualia (was Re: AW: [agi] Language learning, basic patterns, qualia)

2008-05-04 Thread Dr. Matthias Heger



From: Vladimir Nesov [mailto:[EMAIL PROTECTED] wrote

If you can use a brain scanning device that says you experience X
when you experience X, why is it significantly different from
observing stone falling to earth with a device that observes stone
falling to earth



Because only you can know when you experience X.

The experience of red is an individual phenomenon which only you can observe.
Of course we believe that everybody has qualia.

But we can only observe this phenomenon in our own mind.

A falling stone can be observed by thousands of people at the same time,
and we can compare what they have seen (is the stone dark, how big is it, how long did it fall?).

We cannot compare the qualia of two people. Do you experience the color red the same way as I do? We will never know. This makes the phenomenon of qualia a special problem.
We can only compare the flow of signals in the two brains.

Do you really think we will understand the constitution of the experience of red by describing the signals in the brain?
Do you think we will understand the experience of the sound of a car from the signals in the brain?



We will find all the signals relevant for this phenomenon, I am sure.
We will be able to explain every behavior of the brain.
We will see what happens in the brain when we experience red.
But we will not be able to explain it.

A technical explanation for the absence of this ability is the argument about necessary self-reference.
Qualia experiences are the axioms of our knowledge base. They are the basic patterns of consciousness. If you want to explain a phenomenon p, then p is represented by a pattern in your brain. An explanation is nothing other than reducing a certain pattern to a set of more fundamental patterns. But the qualia phenomena are the most basic conscious patterns of all. They are irreducible. Therefore no explanation can exist.

Probably we will discover a very strange arrangement of activity patterns.
But no matter how strange or complicated these patterns are, we will not understand why this particular arrangement really corresponds to a certain quale.








[agi] Language learning, basic patterns, qualia

2008-05-03 Thread Dr. Matthias Heger



From: Matt Mahoney [mailto:[EMAIL PROTECTED] wrote


This is a good example where a neural language model can solve the
problem.  The approximate model is

  phonemes - words - semantics - grammar

where the phoneme set activates both the "apples" and "applies" neurons
at the word level.  This is resolved by feedback from the semantics
level by the learned association (apple - tree), and by the grammar
level by the learned links (apple - NOUN) and the grammatical form ("how
many NOUN are").


The mechanism by which some neural patterns activate other patterns, and the reason for it, is already well understood. I think it is mainly the statistical co-occurrence of two patterns A and B at the same time that strengthens synapses, with the consequence that whenever pattern A is activated by perception, pattern B becomes active from memory. Of course this can also be modeled in O-O software.
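
A minimal O-O sketch of that association mechanism as I read it (hypothetical class and method names, chosen only for illustration): repeated co-occurrence strengthens a link, and a sufficiently strong link lets one pattern activate the other from memory.

class Pattern:
    def __init__(self, name):
        self.name = name
        self.links = {}                      # associated pattern -> learned strength

    def co_occur(self, other, lr=0.2):
        # strengthen the mutual link whenever both patterns are active together
        self.links[other] = self.links.get(other, 0.0) + lr
        other.links[self] = other.links.get(self, 0.0) + lr

    def activate(self, threshold=0.5):
        # activation spreads to patterns whose learned link is strong enough
        return [p.name for p, w in self.links.items() if w >= threshold]

apple, tree = Pattern("apple"), Pattern("tree")
for _ in range(3):                           # repeated joint experience
    apple.co_occur(tree)

print(apple.activate())                      # ['tree']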

My example should show that even a very simple question activates thousands of patterns (objects) in the brain in a stunning chain reaction.

Language itself can be seen as a sequentially coded pattern which represents the neural activity patterns in the brain of the speaker. The task of language is to activate similar patterns in other brains. It seems that patterns of knowledge are, in a certain sense, individuals with the ambition to spread by reproducing themselves in other brains.

This raises the question of how patterns (classes) are created in the first place, i.e. initially learned, as opposed to merely being activated.

Pure language can only cause the creation of patterns which are combinations of existing patterns. As I said, language mainly activates patterns.

The interesting question is how we learn the basic nouns like "ball" or "cat", i.e. abstract concepts for objects of our environment. How do we create the basic patterns?

This question cannot be understood from the mechanism of language alone.
Understanding visual perception and the recognition of objects is essential.

Since I am currently in a philosophical mood, here is another interesting question:

Sometimes I wonder whether we must explain the phenomenon of qualia to be able to create AGI. http://plato.stanford.edu/entries/qualia/

In my opinion this is not necessary, and I even think that no person can ever explain the phenomenon of qualia.

This is my rationale:

When we want to explain a phenomenon, we must reduce it to some basic things which we understand and which we know or suppose to be true. We know this from mathematics: the basic things are the axioms, and every proof must be reducible to the axioms.

What are the basic axioms of our mind? In fact they are the qualia phenomena. Everything we know is built from qualia!
Let's take something very abstract: mathematics. Yes, even mathematics with its axioms is built from qualia.
For example: (a+b)*(a+b) = a*a + 2*a*b + b*b
When we want to explain why this equation is really true, we use axioms like: p*(q+r) = p*q + p*r
But remember how you learned the operator + and the axiom a+b = b+a in school.
You were given pictures, e.g. with apples.
We learned numbers by perceiving pictures:
This is one apple ()
These are two apples: () ()
These are three apples () () ()

If you take one apple () and add two apples () (), you get
three apples () () ().

But if you take two apples first () () and add one apple (),
you also get three apples () () ().

So when we do mathematics, everything is in fact based on such examples which we learned as children.

So even mathematics is based on perception. And every perception is constituted of a set of qualia phenomena. Since all knowledge comes from perception, and every perception is based on qualia,
every explanation is based on qualia. The qualia are the axioms of our conscious mind.

From this we can conclude that we will never be able to explain qualia themselves, because any explanation which relies on the very thing it is supposed to explain is no explanation at all.















AW: AW: AW: [agi] How general can be and should be AGI?

2008-05-02 Thread Dr. Matthias Heger


Matt Mahoney [mailto:[EMAIL PROTECTED] wrote


Object oriented programming is good for organizing software but I don't
think for organizing human knowledge.  It is a very rough
approximation.  We have used O-O for designing ontologies and expert
systems (IS-A links, etc), but this approach does not scale well and
does not allow for incremental learning from examples.  It totally does
not work for language modeling, which is the first problem that AI must
solve.


I agree that the O-O paradigm is not adequate to model all the learning algorithms and models we use. My own example of recognizing voices should show that I doubt we use O-O models in our brain for everything in our environment.

I think our brain learns a roughly hierarchical model of the world. And the algorithms for the low levels (e.g. voices, sounds) are probably completely different from the algorithms for the higher levels of our models. It is evident that a child has learning capabilities far beyond those of an adult.
The reason is not only that the child's brain is nearly empty;
the physiological architecture is also different to some degree. So we can expect that learning the basic low levels of a world model requires algorithms which we only had as children.
And the result of that learning serves to some degree as bias for later learning algorithms when we are adults.

For example, we had to learn to extract syllables from the sound wave of spoken language. Learning grammar rules happens at higher levels; learning semantics is higher still, and so on.

But it is a matter of fact that we use an O-O-like model at the top levels of our world model.
You can see this also in the grammar of language: subjects, objects, predicates and adjectives have their counterparts in the O-O paradigm.

A photo of a certain scene is physically an array of colored pixels. But you can ask a human what he sees, and a possible answer could be:
Well, there is a house. A man walks to the door. He wears a blue shirt. A woman looks through the window ...

Obviously, the answer reveals a lot about how people model the world at their top (conscious) level.
And obviously the model consists of interacting objects with attributes and behavior.
So knowledge representation at higher levels is indeed O-O-like.
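
To make that concrete, here is a toy rendering of the scene description above as interacting objects with attributes and behavior (purely illustrative class names, not a proposed AGI design):

class SceneObject:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes         # e.g. color, size, position

class Person(SceneObject):
    def walk_to(self, target):
        return f"The {self.name} walks to the {target.name}."

house = SceneObject("house", has_door=True, windows=1)
door = SceneObject("door")
man = Person("man", shirt_color="blue")
woman = Person("woman", looking_through="window")

print(man.walk_to(door))                     # The man walks to the door.
print(man.attributes["shirt_color"])         # blue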

I think your answer and mine show that we do not use a single algorithm responsible for extracting all the regularities from our perceptions.

More importantly: there is physiological and psychological evidence that the algorithms we use change to some degree during the first decade of our lives.





AW: Language learning (was Re: AW: AW: AW: AW: [agi] How general can be and should be AGI?)

2008-05-02 Thread Dr. Matthias Heger

 Matt Mahoney [mailto:[EMAIL PROTECTED] wrote

 eat(Food f)
 eat(Food f, List<SideDish> l)
 eat(Food f, List<Tool> l)
 eat(Food f, List<People> l)
 ...

This type of knowledge representation has been tried and it leads to a
morass of rules and no intuition on how children learn grammar.  We do
not know how many grammar rules there are, but it probably exceeds the
number of words in our vocabulary, given how long it takes to learn.



As I said, my intention is not to find a set of O-O-like rules by hand to create AGI.
The fact that early approaches failed to build AGI from sets of similar rules does not prove that AGI cannot consist of such rules.

For example, there were also approaches to creating AI with biologically inspired neural networks, with some minor success, but no real breakthrough either.

So this proves nothing except that the problem of AGI is not easy to solve.

The brain is still a black box with regard to many phenomena.

We can analyze our own conscious thoughts and our communication, which is nothing other than sending ideas and thoughts from one brain to another via natural language.

I am convinced that the structure and content of our language are not independent of the internal representation of knowledge.

And from language we must conclude that there are O-O-like models in the brain, because its semantics is O-O.

There might be millions of classes and relationships.
And surely, every day or night, the brain refactors parts of its model.

The roadmap to AGI will probably be top-down rather than bottom-up.
The bottom-up approach is the one used by biological evolution.

Creating AGI by software engineering means that we first must know where we want to go, and then how to get there.

Human language and conscious thought suggest that AGI must be able to represent the world in an O-O-like way at the top level.
So this ability answers the question of where we want to go.

Again, this does not mean that we must find all the classes and objects ourselves. But we must find an algorithm that generates O-O-like models of its environment from its perceptions and from some bias, where the need for that bias can be justified on performance grounds.

We can expect that the top-level architecture of AGI will be the easiest part of an AGI project, because the contents of our own consciousness give us some hints (though not all) about how our own world representation works at the top level.
And that representation is O-O, in my opinion. There is also a phenomenon of associations between patterns (classes). But this is just a matter of retrieving information and of attention to relevant parts of the O-O model, and it is no contradiction to the O-O paradigm.

When we go to lower levels, it is clear that difficulties arise.
The reason is that we have no possibility of conscious introspection into the low levels of our brain. Science gives us hints mainly for the lowest levels (chemistry, physics...).

So the medium layers of AGI will be the most difficult layers.
By the way, this is also often the case in normal software:
the medium layers contain the base functionality and the framework for the top level.







AW: [agi] Re: AW: Language learning

2008-05-02 Thread Dr. Matthias Heger
I think it is even more complicated. The flow of signals in the brain does not move only from low levels to high levels;
the modules communicate in both directions. And as far as I know, there is already evidence for this from cognitive science.

If you want to recognize objects in pictures, you need to find the edges or boundaries. But the other direction works too: if you know the object, because someone tells you what is in the picture or because you use other knowledge about the picture, then it is easier for you to detect the edges of the object.
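
One simple way to picture this two-way flow (my own framing, with placeholder numbers): treat the top-down expectation as a prior and the local contrast as bottom-up evidence, so knowing what to look for lifts an otherwise weak edge above threshold.

def edge_probability(local_contrast, prior_edge_prob):
    # combine bottom-up evidence with a top-down expectation via Bayes' rule;
    # local_contrast is used as a crude stand-in for the likelihood of an edge
    p_evidence_given_edge = local_contrast
    p_evidence_given_no_edge = 1.0 - local_contrast
    numerator = p_evidence_given_edge * prior_edge_prob
    return numerator / (numerator + p_evidence_given_no_edge * (1.0 - prior_edge_prob))

weak_contrast = 0.4
print(edge_probability(weak_contrast, prior_edge_prob=0.1))   # ~0.07, the edge is missed
print(edge_probability(weak_contrast, prior_edge_prob=0.7))   # ~0.61, once told what the object is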

A thought experiment is a good idea.
Let's say we have a robot in the garden and ask it:
"How many apples are on the tree?"

The robot is assumed to be experienced, i.e. it should have a sufficient world model to understand and answer the question.

I make this assumption at this point because first we have to answer the question of where we want to go. In the following I describe a hypothetical process in the robot's brain. Note that I assume the robot has learned most of this process (the classes, the interactions of objects) from past experience.
But of course it must have had some classes and information flows from its first day on.

OK. The robot receives the sound wave, and its low-level modules try to recognize known patterns in this wave.

First it recognizes a voice pattern.

This triggers a voice object, which in turn triggers different objects. For example:
a speech object, an information object, a person object, and perhaps many other objects.

The person object analyzes the sound wave only to determine who is speaking. The speech object only tries to figure out which language is spoken. But here there is already a trick. The person object detects that the voice comes from the person Matt, and the person object has the value "English" in its "language" attribute. The objects inform each other in parallel about their values, so the speech object receives the value "English" from the person object. This makes it easier for the speech object to recognize the language, because it can use a useful hypothesis and will activate certain English-tester objects. All these objects perform their own analysis and use information about the results of other objects.

After a short time, certain important objects are active:

A question object of the type quantity question.
Word objects of different grammar types with the values
How
Many
APPLES
APPLIES
Are
On
The
Tree

There is something special about the words APPLES and APPLIES.
They have the same word-number attribute value (= third word in the question), and each has a probability value of 50%.
This means the robot is not quite sure whether the third word was APPLES or APPLIES.

The question object is already a higher-level object. It does not use the sound wave input but the set of active word objects.

The question object contains a subject object, which itself contains a GrammarSubject object and a GivenHints object. It has to decide whether the subject is APPLES or TREE.
The robot knows from past experience that the subjects of quantity questions are plural. For any attribute of any object there is a setter method with a learnable validate function. So the subject object accepts only the word APPLES for its GrammarSubject object.

This also increases the probability value of the word APPLES and decreases the probability of APPLIES.

This was just the low level. At this point the robot must understand what he
really shall do.

He knows from experience that he gets reward if it answers the active
question object whenever a corresponding goal object is active.

An answer for a quantity question must be a number.
The number is the result of a count process which corresponds to  the
subject of the quantity question.

Ok. We are in one of the medium levels of AGI. And I already wonder how our
robot should have learned the low level I described so far.
And I stop here because everything is too complex now.

But these thought experiments are strongly necessary if we want to create
AGI 
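
A toy sketch of one small step in this walkthrough, the APPLES/APPLIES resolution (hypothetical class and attribute names, invented probabilities): word candidates carry probabilities, and a quantity-question object that only accepts plural subjects raises one and lowers the other.

class WordCandidate:
    def __init__(self, text, prob, plural=False):
        self.text, self.prob, self.plural = text, prob, plural

class QuantityQuestion:
    # subjects of "how many ..." questions are expected to be plural nouns
    def set_subject(self, candidates):
        for c in candidates:
            c.prob += 0.4 if c.plural else -0.4   # learnable validate step, here hard-coded
        self.subject = max(candidates, key=lambda c: c.prob)
        return self.subject

third_word = [WordCandidate("APPLES", 0.5, plural=True),
              WordCandidate("APPLIES", 0.5, plural=False)]

print(QuantityQuestion().set_subject(third_word).text)    # APPLES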






-----Original Message-----
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 3 May 2008 01:27
To: agi@v2.listbox.com
Subject: [agi] Re: AW: Language learning

--- Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 So the medium layers of AGI will be the most difficult layers.

I think if you try to integrate a structured or O-O knowledge base at
the top and a signal processing or neural perceptual/motor system at
the bottom, then you are right.  We can do a thought experiment to
estimate its cost.  Put a human in the middle and ask how much effort
or knowledge is required.  An example would be translating a low-level
natural language question to a high level query in SQL or Cycl or
whatever formal language the KB uses.

I think you can see that for a formal representation of common sense
knowledge, that the skill required

AW: AW: [agi] How general can be and should be AGI?

2008-05-01 Thread Dr. Matthias Heger


Charles D Hixson [mailto:[EMAIL PROTECTED] 




The two AGI modes that I believe people use are 1) mathematics and 2) 
experiment.  Note that both operate in restricted domains, but within 
those domains they *are* general.  (E.g., mathematics cannot generate 
it's own axioms, postulates, and rules of inference, but given them it 
is general.)  Because of the restricted domains, many problems can't 
even be addressed by either of them, so I suspect the presence of other 
AGI modes.  Possibly even slower and more expensive to use.

I suppose that one could quibble that since the modes I have identified 
are restricted to particular domains, that they aren't *general* 
intelligence modes, but as far as I can tell ALL modes of human thought 
only operate within restricted domains.




AGI which operates only in restricted domains is no AGI as I understand it.
But it seems that I use this term in a much stronger sense than most other people do. I assume that most people understand AGI as human-like intelligence, referring especially to the repertoire of tasks that can be solved.

As I said, 'true AGI' does not use any bias. But any powerful intelligence must use bias, because
real-world state spaces are too complex for 'true AGI'. So AGI as the term is commonly used is
only approximate and limited AGI, though of course much broader than the AI of the present and the past.

I agree with those who say that human-like AGI will probably be built from several narrow AI algorithms working together.

Humans use and create object-oriented descriptions of the world, similar to the paradigms of object-oriented programming languages. This paradigm is very powerful in many domains because the inner structure of these domains is in fact object-oriented. But this does not hold in all domains. Among other advantages, the object-oriented paradigm helps humans make useful generalizations. For example: a television is an electric appliance; electric appliances need electric energy; so if some new electric appliance appears in the future, I already know that it will only work if it gets electric energy.
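
That kind of generalization is exactly what class inheritance gives you; a toy illustration (hypothetical class names):

class ElectricAppliance:
    needs_electric_energy = True

class Television(ElectricAppliance):
    pass

class FutureGadget(ElectricAppliance):     # an appliance we have never seen before
    pass

print(FutureGadget.needs_electric_energy)  # True, known without any new observation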

On the other hand, the object-oriented paradigm is poor for the recognition of sounds and voices.
We can hardly describe the voice of a person as a set of classes and objects which have properties and behavior and interact with each other. So the object-oriented paradigm is an example of a very general paradigm which is nevertheless not useful in all domains.

And the brain probably does not have a single monolithic algorithm that finds regularities at all levels.
The recognition of regularities in sounds is surely not handled by the same algorithm which learns that houses have windows.
The brain even changes its architecture to some degree during a lifetime. A baby's brain has far more synapses than an adult's brain. I think that during the first years humans extend the bias they have from their genes. When they are older, humans can solve many problems, but they rely on the bias they acquired during childhood and from their genes.

By the way, there is a nice analogy between the brain and the universe:
you cannot explore the processes of your own mind from the very first day of your life, because your brain changed its inner structure too much in the first years, and the algorithm for object-oriented patterns probably did not yet exist at that time.
The universe as a whole will likewise never be able to explore the processes of the big bang. At that time its inner structure changed: there were no atoms, and light was scattered always and everywhere. Therefore we, and any possible machine in the universe, can only see events from a few hundred thousand years after the big bang, when there were already atoms.







AW: [agi] How general can be and should be AGI?

2008-04-27 Thread Dr. Matthias Heger

 Ben Goertzel [mailto:[EMAIL PROTECTED] wrote on 26 April 2008, 19:54

 Yes, truly general AI is only possible in the case of infinite
 processing power, which is
 likely not physically realizable.   
 How much generality can be achieved with how much
 Processing power, is not yet known -- math hasn't advanced that far yet.


My point is not only that 'general intelligence without any limits' would need infinite resources of time and memory.
That much is trivial, of course. What I wanted to say is that any intelligence has to be narrow in a sense if it wants to be powerful and useful. There must always be strong assumptions about the world deep inside any algorithm for useful intelligence.

Let me explain this point in more detail:

By useful and powerful intelligence I mean algorithms that do not need resources which grow exponentially with the state and action space.

Take the credit assignment problem of reinforcement learning.
The agent has several sensor inputs which build the perceived state space of its environment.
So if the algorithm is truly general, the state space grows exponentially with the number of sensor inputs and the number of past time steps it considers. Every pixel of the eye's retina is part of the state description if you are truly general. And every tiny detail of the past may be important if you are truly general.
And even if you are less general and describe your environment not by pixels but by words of ordinary language, the state space is huge.
For example, a state description could be:

 ... I am in a kitchen. The door is open. It has two windows. There is a sink. And three cupboards. Two chairs. A fly is on the right window. The sun is shining. The color of the chair is ... etc. etc. ...

Even this far less general state description would fill pages.

So an AGI agent acts in huge state spaces and huge action spaces. It always has to solve the credit assignment problem: which action in which state is responsible for the current outcome in the current situation, and which action in which state will give me the best outcome? A truly general AI algorithm without much predefined domain knowledge, suitable for arbitrary state spaces, will have to explore the complete state-action space, which as I said grows exponentially with sensor inputs and time.
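
A back-of-the-envelope illustration of that blow-up, with deliberately tiny, made-up numbers: the number of distinct sensor histories a fully general agent would have to distinguish is (values per sensor)^(sensors * time steps).

values_per_sensor = 2     # even binary pixels suffice to make the point
sensors = 100             # a tiny retina
time_steps = 10           # a short history window

distinct_histories = values_per_sensor ** (sensors * time_steps)
print(f"about 10^{len(str(distinct_histories)) - 1} possible sensor histories")
# roughly 10^301: tabulating credit for every state-action pair is hopeless without bias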

I think every useful intelligence algorithm must avoid the pitfall of exponential cost, and the only way to do this is to be less general and to give the agent more predefined domain knowledge (implicit or explicit, symbolic or non-symbolic, procedural or non-procedural).
Even if you say that human-level AI is able to generate its own state spaces, there is still the problem that the initial sensory state space is of exponential extent.

So in every useful AGI algorithm there must be certain strong limits, in the form of explicit or implicit rules for how to represent the world initially and/or how to generalize and build a world representation from experience.

This means that the only way to avoid the problem of exponential growth is to hard-code implicit or explicit assumptions about the world.
And these initial assumptions are the most important limits of any useful intelligence. They are much more important than the restrictions of time and memory, because with these limits it will probably no longer be true that you can learn everything and solve any solvable problem if you only get enough resources. The algorithm itself must have fixed inner limits to be useful in real-world domains. These limits cannot be overcome by experience.

Even an algorithm that guesses new algorithms and replaces itself, once it can prove that it has found something more useful than itself, has fixed statements that it cannot overcome. More importantly: if you want to make such an algorithm practically useful, you have to give it predefined rules for how to reduce the huge space of possible algorithms. And again, these rules are a more important problem than the lack of memory and time.

One could argue that the algorithm can change these rules through its own experience.
But you can only show that changing the rules algorithmically enhances performance if the agent has good experiences with the new rules. You cannot prove that certain algorithms would not improve your performance if you do not know those algorithms at all. Remember: the rules do not define a certain state or algorithm; they define a reduction of the whole algorithm space the agent can consider while trying to become more powerful.
The rules within the algorithm contain knowledge of what the learning agent does not know itself and cannot learn.
Even if you can learn to learn. And learn to learn to learn. And ...
Every recursive procedure has to have a non-reducible base, and it is clear that the overall performance and abilities depend crucially on that basic non-reducible procedure. If this procedure is too general, the performance slows exponentially with the space with which this

  1   2   >