Re: [agi] The Test

2008-02-08 Thread Richard Loosemore

Mike Tintner wrote:


Richard:  Consider yourself corrected:  many people realize the importance
of generalization (and related processes).

People go about it in very different ways, so some are more specific 
and up-front about it than others, but even the conventional-AI people 
(with whom I have many disagreements on other matters) realize the 
importance of it, and are trying to do something about it.


As for what "AGI systembuilders" are doing, you can take it from me 
that my own system is deeply rooted in the concept of generalization.



Richard,

We have another of your misreadings in haste here - something of a 
Q.E.D. misreading. I can do no better than requote my opening lines 
(please read carefully):


"There's a simple misreading here. No way am I
saying nobody is looking at the problem! I am saying nobody is 
offering a solution! And none of the AGI systembuilders present or 
past have *acknowledged* that they haven't offered a solution - 
otherwise they wouldn't have made such large claims. And I am not 
aware of anyone even offering an equivalent of the General Test I 
just offered."


Like nearly everyone else, you are indeed looking at the problem, and 
even claiming again a solution, but have so far actually offered bupkes 
:) - certainly in relation to the generalization problem/test. And until 
you do, it remains to be seen whether you are even actually addressing 
the problem. I am suggesting - and I shall be delighted to eat my words 
- that this is the central one of what Mark Waser identified as the 
unacknowledged, "a-miracle-will-happen-here" holes in not just Ben's but 
everyone's project plans. And indeed it is also the central reason why - 
as Wozniak, pace Storrs Hall, more or less identified - computers & 
AGI's and, to some extent, robots are "Tommy's" (and then some): deaf, 
dumb and blind quadriplegics who, while they may be extraordinary 
autistic savants, are still unable to deal with the real world.


Perhaps better to wait for my next post before replying.


I'm sorry, but I think this argument is losing coherence.

If you are complaining that no-one has solved the problem of 
generalization then you are (to coin a phrase) saying bupkes :-). 
According to that way of thinking, nobody has "solved" anything until 
Delivery Day.


If, on the other hand, you are saying that someone has a part of their 
plans that belongs in the "a-miracle-will-happen-here" category (and you 
do indeed say this, no?), then you are saying that that person is 
ignoring it, trying to pretend they don't need it, not aware of the fact 
that it is missing-but-crucial, etc etc.  In a nutshell, they are not 
working on it, and they should be.


Those two types of critique are not the same.

Once again I am deeply confused about what you are criticising.  Perhaps 
the fault is mine, but when I read what you write, I get the feeling 
that the left hand paragraphs knoweth not what the right hand paragraphs 
sayeth...




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93139505-4aa549


Re: [agi] The Test

2008-02-08 Thread Mike Tintner


Richard:  Consider yourself corrected:  many people realize the importance
of generalization (and related processes).

People go about it in very different ways, so some are more specific and 
up-front about it than others, but even the conventional-AI people (with 
whom I have many disagreements on other matters) realize the importance of 
it, and are trying to do something about it.


As for what "AGI systembuilders" are doing, you can take it from me that 
my own system is deeply rooted in the concept of generalization.



Richard,

We have another of your misreadings in haste here - something of a Q.E.D. 
misreading. I can do no better than requote my opening lines (please read 
carefully):


"There's a simple misreading here. No way am I
saying nobody is looking at the problem! I am saying nobody is offering a 
solution! And none of the AGI systembuilders present or past have 
*acknowledged* that they haven't offered a solution - otherwise they 
wouldn't have made such large claims. And I am not aware of anyone even 
offering an equivalent of the General Test I just offered."


Like nearly everyone else, you are indeed looking at the problem, and even 
claiming again a solution, but have so far actually offered bupkes :) - 
certainly in relation to the generalization problem/test. And until you do, 
it remains to be seen whether you are even actually addressing the problem. 
I am suggesting - and I shall be delighted to eat my words - that this is 
the central one of what Mark Waser identified as the unacknowledged, 
"a-miracle-will-happen-here" holes in not just Ben's but everyone's project 
plans. And indeed it is also the central reason why - as Wozniak, pace 
Storrs Hall, more or less identified - computers & AGI's and, to some 
extent, robots are "Tommy's" (and then some): deaf, dumb and blind 
quadriplegics who, while they may be extraordinary autistic savants, are 
still unable to deal with the real world.


Perhaps better to wait for my next post before replying.



Mike Tintner wrote:

Benjamin: When I read your
post, claiming that generalization is important, I think to myself
"yeah, that is what everybody else is saying and attempting to solve -- 
I even gave you several examples of how generalization could work", so I
then find myself surprised that you claim that nobody is looking at it!


Quick response for now. There's a simple misreading here. No way am I 
saying nobody is looking at the problem! I am saying nobody is offering a 
solution! And none of the AGI systembuilders present or past have 
*acknowledged* that they haven't offered a solution - otherwise they 
wouldn't have made such large claims. And I am not aware of anyone even 
offering an equivalent of the General Test I just offered. Yeah, it's an 
incredibly obvious test - almost a redefinition (although just a little 
more, too) of "Artificial GENERAL Intelligence." But you'd be amazed how 
often people ignore the obvious. Look also at how strenuously Ben 
objected when I suggested that his definition of intelligence as 
"achieving *complex* goals in complex environments" should be replaced by 
one focussing on the *general* aspect (for AGI), which is what he really 
seemed to mean in one passage (although in another text of his, the 
general aspect simply gets lost).


I do believe though - and I stand to be corrected - that nobody has fully 
identified the central importance of this problem - i.e. I agree with 
Joseph Gentle's:
"I think making a representation of the world which can be generalised 
and abstracted is the emergent crux of AGI".


Consider yourself corrected:  many people realize the importance of 
generalization (and related processes).


People go about it in very different ways, so some are more specific and 
up-front about it than others, but even the conventional-AI people (with 
whom I have many disagreements on other matters) realize the importance of 
it, and are trying to do something about it.


As for what "AGI systembuilders" are doing, you can take it from me that 
my own system is deeply rooted in the concept of generalization.



Richard Loosemore



Yes, it's still *emerging* AFAIK. If you want to correct me here, though, 
you'll have to quote some literature.


Yes, I'm developing & will set out a much larger argument here - later 
today/tomorrow. When I do, I think you'll see why people are, however 
subtly, avoiding the problem. [BTW I will want to attach a photo file - 
can one do that?]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;









-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=9

Re: [agi] The Test

2008-02-08 Thread Richard Loosemore

Mike Tintner wrote:

Benjamin: When I read your
post, claiming that generalization is important, I think to myself
"yeah, that is what everybody else is saying and attempting to solve -- 
I even gave you several examples of how generalization could work", so I
then find myself surprised that you claim that nobody is looking at it!


Quick response for now. There's a simple misreading here. No way am I 
saying nobody is looking at the problem! I am saying nobody is offering 
a solution! And none of the AGI systembuilders present or past have 
*acknowledged* that they haven't offered a solution - otherwise they 
wouldn't have made such large claims. And I am not aware of anyone even 
offering an equivalent of the General Test I just offered. Yeah, it's an 
incredibly obvious test - almost a redefinition (although just a little 
more, too) of "Artificial GENERAL Intelligence." But you'd be amazed how 
often people ignore the obvious. Look also at how strenuously Ben 
objected when I suggested that his definition of intelligence as 
"achieving *complex* goals in complex environments" should be replaced 
by one focussing on the *general* aspect (for AGI), which is what he 
really seemed to mean in one passage (although in another text of his, 
the general aspect simply gets lost).


I do believe though - and I stand to be corrected - that nobody has 
fully identified the central importance of this problem - i.e. I agree 
with Joseph Gentle's:
"I think making a representation of the world which can be generalised 
and abstracted is the emergent crux of AGI". 


Consider yourself corrected:  many people realize the importance of 
generalization (and related processes).


People go about it in very different ways, so some are more specific and 
up-front about it than others, but even the conventional-AI people (with 
whom I have many disagreements on other matters) realize the importance 
of it, and are trying to do something about it.


As for what "AGI systembuilders" are doing, you can take it from me that 
my own system is deeply rooted in the concept of generalization.



Richard Loosemore



Yes, it's still *emerging* 
AFAIK. If you want to correct me here, though, you'll have to quote some 
literature.


Yes, I'm developing & will set out a much larger argument here - later 
today/tomorrow. When I do, I think you'll see why people are, however 
subtly, avoiding the problem. [BTW I will want to attach a photo file 
- can one do that?]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93139505-4aa549


Re: [agi] The Test

2008-02-08 Thread Mike Tintner

Benjamin: When I read your
post, claiming that generalization is important, I think to myself
"yeah, that is what everybody else is saying and attempting to solve -- 
I even gave you several examples of how generalization could work", so I
then find myself surprised that you claim that nobody is looking at it!


Quick response for now. There's a simple misreading here. No way am I saying 
nobody is looking at the problem! I am saying nobody is offering a solution! 
And none of the AGI systembuilders present or past have *acknowledged* that 
they haven't offered a solution - otherwise they wouldn't have made such 
large claims. And I am not aware of anyone even offering an equivalent of 
the General Test I just offered. Yeah, it's an incredibly obvious test - 
almost a redefinition (although just a little more, too) of "Artificial 
GENERAL Intelligence." But you'd be amazed how often people ignore the 
obvious. Look also at how strenuously Ben objected when I suggested that his 
definition of intelligence as "achieving *complex* goals in complex 
environments" should be replaced by one focussing on the *general* aspect 
(for AGI), which is what he really seemed to mean in one passage (although 
in another text of his, the general aspect simply gets lost).


I do believe though - and I stand to be corrected - that nobody has fully 
identified the central importance of this problem - i.e. I agree with Joseph 
Gentle's:
"I think making a representation of the world which can be generalised and 
abstracted is the emergent crux of AGI". Yes, it's still *emerging* AFAIK. 
If you want to correct me here, though, you'll have to quote some 
literature.


Yes, I'm developing & will set out a much larger argument here - later 
today/tomorrow. When I do, I think you'll see why people are, however 
subtly, avoiding the problem. [BTW I will want to attach a photo file - 
can one do that?] 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93139505-4aa549


Re: [agi] The Test

2008-02-08 Thread Vladimir Nesov
On Feb 8, 2008 7:12 AM, Benjamin Johnston <[EMAIL PROTECTED]> wrote:
>
> 4. If you're trying to develop your own argument, then I'd recommend
> taking a look at some of the more philosophical works in the research
> literature - not just in AGI but also in areas like embodied robotics,
> commonsense reasoning, cognitive science, qualitative reasoning and
> cognitive robotics. I personally found that writings on the symbol
> grounding problem were very helpful in clarifying a lot of my own
> thoughts (and in understanding how my own opinion relates to established
> positions). I'm sure there's something out there that would do the same
> for you, whether it be in the grounding problem (like me) or something
> completely different.

Ben,

Could you say a couple of words about the specifics of what you found
helpful, and in which writings? There are plenty, and I mainly found
them long-winded and unhelpful (although a significant part of what I've
developed so far can be characterised as 'philosophy' of how to build
an AGI, and it would have greatly helped if I could just read it).
The closest thing to inspiration-generating writing that I have found is
basic cognitive science, and Hofstadter.

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93139505-4aa549


Re: [agi] The Test

2008-02-07 Thread Benjamin Johnston


Thank you for another really constructive response - and I think that 
I, at any rate, am really starting to get somewhere. I didn't quite 
get to the nub of things with my last post. I think I can do a better 
job this time & develop the argument still more fully later.



Hi Mike,

I have five comments.


1. You seem to be using a more specific definition of AGI than I. I 
don't believe that all AGI work must necessarily focus on real-world 
embodiment. Don't you think it is possible to have an artificial general 
intelligence (such as an AGI info-bot) that inhabits a virtual symbolic 
world (such as a database); a world in which initial classification of 
objects is irrelevant to the agent?


I think AGI can encompass a range of different kinds of intelligences 
that inhabit not just real world environments, but also virtual 
environments, language-based environments or even purely formal symbolic 
environments. Some approaches might be better suited to particular 
environments.



2. I don't believe it is right to say that nobody is looking at 
generalization. I illustrated how generalization might be achieved 
automatically by a mutation operator in a GA biased towards 
generalization (for all instances of a given symbol, substitute it with 
a more general symbol), or how a GA might be used to automatically acquire 
categorizations of abstract concepts from raw sensory features. 
Generalization lies at the very core of machine learning and AGI, and 
there are plenty of formal and informal attempts to describe it.
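
As a purely illustrative sketch of such an operator (not code from any of 
the systems discussed in this thread), assuming a hand-written is-a table 
and made-up symbol names, it might look like this in Python:

import random

# Hypothetical is-a table: each symbol maps to a more general symbol.
IS_A = {
    "lego_block": "building_block",
    "rock": "building_block",
    "building_block": "thing",
    "office_floor": "terrain",
    "rocky_hillside": "terrain",
}

def generalize_mutation(rule, rng=random):
    # Pick one symbol that has a known generalization and substitute
    # every instance of it with the more general symbol.
    candidates = [s for s in set(rule) if s in IS_A]
    if not candidates:
        return list(rule)  # nothing in this rule can be generalized
    target = rng.choice(candidates)
    parent = IS_A[target]
    return [parent if s == target else s for s in rule]

# A learned wall-building rule, generalized one step by the operator:
rule = ["stack", "lego_block", "on", "lego_block"]
print(generalize_mutation(rule))
# -> ['stack', 'building_block', 'on', 'building_block']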



3. It certainly is my own experience that I got into this area because I 
was intrigued by feelings that true intelligence is different from 
classical logical deduction or the standard kinds of machine learning 
algorithms. I suspect that most people here have felt (and still do) the 
same way, and it looks like you feel that way too.


When I look at various approaches, if I focus on the similarities 
instead of the differences, it strikes me that we're all attempting to 
attack the same deep issue from different perspectives. When I read your 
post, claiming that generalization is important, I think to myself 
"yeah, that is what everybody else is saying and attempting to solve -- 
I even gave you several examples of how generalization could work", so I 
then find myself surprised that you claim that nobody is looking at it!


I'll illustrate my point with fuzzy/uncertain logics, because you 
directly attacked them in a previous post...


My own initial reaction to modified logics that support "fuzzy" 
propositions was also that it didn't match my intuitions of how 
intelligence works - that they're "not even looking at the same 
fundamental problem". But, if - as you also say - the problem is that 
formal methods can't be used until after "you've classified the real 
world and predigested its irregularities into nice neat regular units", 
then I realize that maybe this fuzzy approach really does make sense: 
they're trying to use the universality of logic but they're also trying 
to skip over the need for "nice neat regular units" by letting the logic 
natively accept "ugly messy irregular units". You might not buy this 
particular reasoning and so you may need to find one of your own that 
maps their objectives to your own view of the challenge of intelligence; 
but I think you will find that with an open mind, you really will start 
seeing that there are connections to your own ideas. That is, everybody 
does have some kind of grasp on the same fundamental problem, but 
they're just looking at it from different angles.


When you start to formulate your ideas into a coherent argument (that 
doesn't use vague words like "structured" without definition), you might 
then start forming your own ideas of how to approach the problem. 
Hypothetically... you might reason that generalization is fundamental, 
so you could (again, hypothetically) start off by experimenting with a 
translation of this abstract idea into a concrete computational model 
where self-modifying programs can take their own subroutines and 
automatically search for generalizations of those subroutines (and maybe 
you also have another process of hierarchical learning to discover "x is 
a generalization of y" patterns). At that point you'll have your own AGI 
system-building program, and then maybe you'll come across somebody else 
who sees what you are doing, who ignores the background work that got 
you there and your long term vision of where you want to go with it, but 
simply claims "hey, no, intelligence isn't self-modifying subroutine 
abstraction, duh! why don't you come up with a crux idea?".



4. If you're trying to develop your own argument, then I'd recommend 
taking a look at some of the more philosophical works in the research 
literature - not just in AGI but also in areas like embodied robotics, 
commonsense reasoning, cognitive science, qualitative reasoning and 
cognitive robotics. I personally found that writings on the symbol 
grounding problem were very helpful in clarifying a lot of my own 
thoughts (and in understanding how my own opinion relates to established 
positions). I'm sure there's something out there that would do the same 
for you, whether it be in the grounding problem (like me) or something 
completely different.
Re: [agi] The Test

2008-02-06 Thread Joseph Gentle
On Feb 7, 2008 11:53 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> And I think  it's clear,  if only in a very broad way,  how the human mind
> achieves this (which I'll expound in more detail another time) - it has what
> you could call a "general activity language" - and learns every skill *as an
> example of a general activity*. Every particular skill is learned in a
> broad, general way in terms of concepts that can be and are applied to all
> skills/ activities (as well as more skill-specific terminology). Very
> general, "modular" concepts.

My approach to the whole problem of AGI is to think "What is the hard
bit, that no one has really gotten right at all?". I completely agree
with the problem you pose here - I think making a representation of
the world which can be generalised and abstracted is the emergent crux
of AGI. Neural networks and other machine learning methods like
decision trees don't have representations which support the kind of
operation we're talking about.

How can we solve this? I think it requires making a really simple
associative language of sorts. We need something in which I can
trivially represent things like:

"A is somehow related to B"
"A and B have some common properties. I will call these common
properties P and make A and B specialisations of P"
"C is similar to A and B somehow. C might have the properties of P."

If the representation just makes a graph of links between things
(/objects/concepts/cortical columns) then finding the common links
between two objects doesn't actually seem that hard a problem. I
think.
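
As a toy illustration only (not a proposal for the real representation, 
and with made-up concept names), such a link graph and the "common links" 
lookup might be sketched in Python as:

from collections import defaultdict

class ConceptGraph:
    # A bare-bones associative graph: concepts are strings, links are
    # undirected and unlabelled ("A is somehow related to B").

    def __init__(self):
        self.links = defaultdict(set)

    def relate(self, a, b):
        # Record that a is somehow related to b.
        self.links[a].add(b)
        self.links[b].add(a)

    def common_links(self, a, b):
        # Things both a and b are linked to: the shared properties P.
        return self.links[a] & self.links[b]

    def abstract(self, a, b, p):
        # Name the shared properties P as a concept of their own, and
        # make a and b specialisations of p.
        for shared in self.common_links(a, b):
            self.relate(p, shared)
        self.relate(a, p)
        self.relate(b, p)

g = ConceptGraph()
g.relate("football", "ball"); g.relate("football", "team_game")
g.relate("rugby", "ball");    g.relate("rugby", "team_game")
print(g.common_links("football", "rugby"))  # e.g. {'ball', 'team_game'}
g.abstract("football", "rugby", "ball_game")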

-J

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=94603346-a08d2f


Re: [agi] The Test

2008-02-06 Thread Mike Tintner

Benjamin,

Thank you for another really constructive response - and I think that I, at 
any rate, am really starting to get somewhere. I didn't quite get to the nub 
of things with my last post. I think I can do a better job this time & 
develop the argument still more fully later.


Why are those ideas not crux ideas - those schools of programming not true 
AGI? You almost hit the nail on the head with:


"My point is, however, that general purpose reasoning is possible -
I think there are plenty of signs of how it might actually work."

i.e. none of those approaches actually show true "general purpose 
reasoning," you only hope and believe that some new ones will in the future 
(and have some good suggestions about how).


What all those schools lack, to be a bit more precise,  is an explicit 
"generalizing procedure" -   let's call it a "real world generalization 
procedure." They don't tell you directly how they are going to generalize 
across domains - how, having learnt one skill, they can move on to another. 
The GA's, if I've understood,  didn't generalize their skills - didn't 
recognize that they could adapt their walking skills to water - their 
minders did. An explicit real-world generalization procedure must tell you 
how the system itself is going to recognize an unfamiliar domain as related 
to the familiar one(s). How the lego construction system will recognize 
irregular-shaped rocks as belonging to a larger class that includes the lego 
bricks. How Ben's pet who, say, knows all about navigating neat, flat office 
buildings will be able to recognize a very different, messy bomb site or 
rocky hillside as nevertheless all examples of navigable terrains. How a 
football playing robot will recognize other games such as rugby, hockey etc 
as examples of "ball games [I may be able to play]". How in other words the 
AGI system will recognize unfamiliar (and not obviously classifiable) 
problems as having something in common with familiar ones. And how those 
systems will have general ways of adapting their skills/ solutions. How the 
lego system will adapt its bricklaying movements to form rocklaying 
movements, or the soccer player will adapt its arm and leg movements to 
rugby.


I think you'll find that all the schools of programming only wave at this... 
they don't offer you an explicit method. I'll take a bet, for example, that 
Ben G cannot provide you with even a virtual world generalization procedure. 
The AGI systems/agents, it must be stressed, have to be able to recognize 
*independently* that they can move on to new domains - even though they will 
of course also need to seek help to learn the rules etc, as we humans do.


And I think  it's clear,  if only in a very broad way,  how the human mind 
achieves this (which I'll expound in more detail another time) - it has what 
you could call a "general activity language" - and learns every skill *as an 
example of a general activity*. Every particular skill is learned in a 
broad, general way in terms of concepts that can be and are applied to all 
skills/ activities (as well as more skill-specific terminology). Very 
general, "modular" concepts.


But such human powers of generalization are still way, way beyond current 
computers. The human mind's ability to cross domains is dependent on the 
ability, for example,  to generalize from something as concrete as "taking 
steps across a field"  to something as abstract as "taking steps to solve a 
problem in philosophy or formal logic".


And the reason that I classify all this as *real world* generalization is 
that it cannot be achieved by logic or mathematics, which is what all the 
schools you mention depend on (no?). They can't help you classify the bricks 
and rocks as alike, or rugby as like football, or a rocky bomb site as like 
an office floor, let alone steps across a field as like steps in an 
argument. They can only be brought into play *after* you've classified the 
real world and predigested its irregularities into nice neat regular units 
that they can operate on. That initial classification/ generalization 
requires the general skill that is still beyond all AGI's - and actually, I 
think, doesn't even have a name.




BENJAMIN: MT:>> I think your approach here *is* representative - &, as you 
indicate,
the details of different approaches to AGI in this discussion,  aren't 
that important. What is common IMO to your and the thinking of others 
here is that you all start by asking yourselves : what kinds of 
programming will solve AGI? Because programming is what interests you 
most and is your life.



Actually, that isn't necessarily accurate. I'm currently collaborating 
with a cognitive scientist, and I've seen other people here hint at 
drawing their own inspiration from cognitive science and other 
non-programming disciplines.


I reason the problem like this:
1. I know intelligence is possible, by looking at the animal kingdom.
2. I don't believe that the animal kingdom is doing something that is 
formally uncomputable (i.e., intelligence is computable).

Re: [agi] The Test

2008-02-06 Thread Richard Loosemore

Benjamin Johnston wrote:


Very briefly, my focus a while back in attacking programs was not on 
the sign/ semiotic - and more particularly, symbolic -  form of 
programs, although that is v. important too.


My focus was on the *structure* of programs - that's what they are: 
structured and usually sequenced sets of instructions. No matter how 
sophisticated their structure, and/or their capacity to adapt their 
structure, they are still structured.



I'm unclear what you mean by structure.

Interpretation 1:
-
Every program in a modern computer language is a structured and 
sequenced set of instructions. It isn't possible to write an unsequenced 
set of instructions, because the language itself imposes that structure.


If structured programs cannot be intelligent, then if I understand you 
correctly, it follows that what you are saying is that it is 
*impossible* to write intelligent systems in modern computer programming 
languages. Given that modern computer languages are Turing complete 
(modulo space and time limitations), your claims would therefore be 
equivalent to saying that intelligence is not computable.


Interpretation 2:
-
Maybe you mean something a little stronger by structure? That the way 
that human beings engineer software is very structured, and software 
that has been engineered by humans with that kind of structure cannot 
possibly solve unstructured problems.


Do you think, then, that it is possible for a human to write a 
structured program that generates unstructured programs that have 
general intelligence?


Ben,

I feel compelled to help out here, because (as I said in my post to 
Mike), he is using words in a way that causes confusion ... and since 
Mike and I have had the same conversation/debate at least twice before, 
it might help if I explain what I have already understood from those 
previous conversations.  The key thing is that he does not mean 
"structured" in any of the senses that most others would use the term.


What Mike is trying to say is that he has great objections to the style 
of Artificial Intelligence system in which the intelligence process is 
supposed to be very narrowly rule-governed, with simple symbols (no 
internal structure to the symbols) and very deterministic processing. 
Unfortunately, he often uses the word "program" to describe this, 
although he has now also called it "structured".  I would tend to call 
that approach to AI something like "simple, logical symbol-processing", 
or some such term.


Other people would make the same distinction between different types of 
AI, but use different language.  What Mike is demanding is that people 
recognize the limitations of that style of AI, and move to something 
that allows for fluidity, creativity, unpredictability 
(non-deterministic reasoning?), and perhaps most important of all, some 
degree of emergence.


In my previous debates with him I have tried to explain that there are 
many, many people who already accept the limitations of simple, logical 
symbol-processing, and that approaches such as genetic algorithms, 
neural nets, the FARG-type systems of the Hofstadter school, and also my 
own "molecular" approach (closely related to Hofstadter's), all have at 
least some of the characteristics that he is asking for.


In particular, I have stressed that there is no black and white 
distinction between systems that are rigid (in the way that he complains 
of) and systems that are fluid and unpredictable (in the way that he 
prefers), but rather there is a continuum of types.  And even more 
important, "programs" are completely neutral on this score:  you can use 
"programs" to build systems that are rigid or systems that are labile.


Mike: I know you do not accept this analysis of your position, but I 
believe that whenever you try to explain your position, it always comes 
out as equivalent to this.





Richard Loosemore





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=94169430-374467


Re: [agi] The Test

2008-02-05 Thread Benjamin Johnston


Very briefly, my focus a while back in attacking programs was not on 
the sign/ semiotic - and more particularly, symbolic -  form of 
programs, although that is v. important too.


My focus was on the *structure* of programs - that's what they are: 
structured and usually sequenced sets of instructions. No matter how 
sophisticated their structure, and/or their capacity to adapt their 
structure, they are still structured.



I'm unclear what you mean by structure.

Interpretation 1:
-
Every program in a modern computer language is a structured and 
sequenced set of instructions. It isn't possible to write an unsequenced 
set of instructions, because the language itself imposes that structure.


If structured programs cannot be intelligent, then if I understand you 
correctly, it follows that what you are saying is that it is 
*impossible* to write intelligent systems in modern computer programming 
languages. Given that modern computer languages are Turing complete 
(modulo space and time limitations), your claims would therefore be 
equivalent to saying that intelligence is not computable.


Interpretation 2:
-
Maybe you mean something a little stronger by structure? That the way 
that human beings engineer software is very structured, and software 
that has been engineered by humans with that kind of structure cannot 
possibly solve unstructured problems.


Do you think, then, that it is possible for a human to write a 
structured program that generates unstructured programs that have 
general intelligence?


-

-Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=94090141-147bec


Re: [agi] The Test

2008-02-05 Thread Benjamin Johnston


I think your approach here *is* representative - &,  as  you indicate, 
the details of different approaches to AGI in this discussion,  aren't 
that important. What is common IMO to your and the thinking of others 
here is that you all start by asking yourselves : what kinds of 
programming will solve AGI? Because programming is what interests you 
most and is your life.



Actually, that isn't necessarily accurate. I'm currently collaborating 
with a cognitive scientist, and I've seen other people here hint at 
drawing their own inspiration from cognitive science and other 
non-programming disciplines.


I reason the problem like this:
1. I know intelligence is possible, by looking at the animal kingdom.
2. I don't believe that the animal kingdom is doing something that is 
formally uncomputable (i.e., intelligence is computable).
3. I can see the things that intelligence can do, and have ideas about 
how it may work.
4. I recognize that biological computing machinery is vastly different 
to artificial computing machinery.
5. I assume that it is possible to build intelligence on current 
artificial computing machinery (i.e., intelligence is computable on 
current computers).
6. So, my goal is to translate those ideas about intelligence to the 
hardware that we have available.


Programming comes into it not because we are obsessed with programming, 
but because we have to make do with the computing machinery that is 
available to us. We're attempting to exploit the strengths of computing 
machinery (such as its ability to do fast search and precise logical 
deduction) to make up for the weaknesses of the machinery (such as the 
difficulty in analogizing or associative learning). I don't believe 
there is only one path to intelligence, and we must be very conscious of 
the platform that we are building on.


What you have to do in order to produce a true, crux idea, I suggest, 
is not just define your approach but APPLY IT TO A PROBLEM EXAMPLE OR 
TWO of general intelligence - show how it might actually work.



Well, that is what many of us are doing. We have these plausible crux 
ideas, and we're now attempting to apply them to problems of general 
intelligence. It takes time to build systems, and the more ambitious the 
demonstration the longer it takes to build. I have my own challenge 
problems in the pipeline (I have to start very small, and have been 
using the commonsense problem page*), and I know most serious groups 
involved in system building have their own problems too.


* http://www-formal.stanford.edu/leora/commonsense/

I've mentioned Semantic Web reasoning and General Game Playing. Even 
something like the Weka toolkit could be seen as a kind of general 
intelligence - you can run their machine learning algorithms on any kind 
of dataset and it will discover novel patterns. I admit that those are 
weak examples from an AGI perspective because they are purely symbolic 
domains, but it seems that AGI comes in where those kinds of examples 
end. My point is, however, that general purpose reasoning is possible - 
I think there are plenty of signs of how it might actually work.


You have to show how, for example, your GA might enable your 
lego-constructing system to solve an unfamiliar problem about building 
a dam of rocks in water. You must show that even though it had only 
learned about regularly-shaped bricks, it could neverthless recognize 
irregularly-shaped rocks as, say, "building blocks"; and even though 
it had only learned to build on solid ground, it could nevertheless 
proceed to build on ground submerged in water. [I think BTW, when you 
try to do this, you will find that GA's *won't* work]



Why not?

Genetic algorithms have been used in robots that learn how to move. You 
can connect a GA up to a set of motors and set up the algorithm so that 
movement is rewarded. Attach the motors to legs and put it on land, and 
the robot will eventually learn that walking maximizes its goals. Put 
the motors into fins and a tail and put it in water, and the robot will 
eventually learn that swimming maximizes its goals. Isn't this a perfect 
example of how GAs can problem-solve across domains?
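
Purely to illustrate that "same algorithm, different reward" point (the 
fitness functions below are stand-ins, not real motor simulations), a toy 
GA in Python might look like:

import random

def evolve(fitness, genome_len=8, pop_size=30, generations=200, rng=random):
    # Genomes are lists of motor parameters in [-1, 1]; only the fitness
    # function changes between the land and water settings.
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(genome_len)
            child = a[:cut] + b[cut:]         # crossover
            i = rng.randrange(genome_len)
            child[i] += rng.gauss(0, 0.1)     # small mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in rewards: 'walking' favours alternating motor outputs,
# 'swimming' favours a smooth, strong travelling pattern.
def walk_fitness(g):
    return sum(-g[i] * g[i + 1] for i in range(len(g) - 1))

def swim_fitness(g):
    return sum(g) - sum(abs(g[i] - g[i - 1]) for i in range(1, len(g)))

best_on_land = evolve(walk_fitness)
best_in_water = evolve(swim_fitness)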


Or to address your specific (but more challenging) problem directly...

Let's say, instead, that we're using GAs to generate high-level 
strategies, plans and reasoning... the GA may evolve, on land, some 
wall-building strategies:

1. Start with the base
2. Put lego blocks on top of other lego blocks
3. Make sure lego blocks are stacked at an even height
4. Make sure there are no gaps

When we give the robot the goal of building a dam, it may then take 
those existing strategies and evolve generalizations:

Here's one:
1. Start with the base
2. Put things on top of other things
3. Make sure things are stacked at an even height
4. Make sure there are no gaps
This could happen by a cross-over or mutation that generalizes 
categories (Lego block -> Thing) -- and it may be the case that an 
AGI-op

Re: [agi] The Test

2008-02-05 Thread Mike Tintner

Benjamin [as in Johnston :)],

Thank you for a detailed response which is totally constructive. (An uncommon 
thing and I appreciate it).  And therefore v. helpful.


It helps me understand how you & others think. I can see more clearly why 
you believe  - reasonably from your POV - that crux ideas have been offered. 
I hope I can show you why they're not really crux ideas.


I think your approach here *is* representative - &,  as  you indicate, the 
details of different approaches to AGI in this discussion,  aren't that 
important. What is common IMO to your and the thinking of others here is 
that you all start by asking yourselves : what kinds of programming will 
solve AGI? Because programming is what interests you most and is your life.


And in assessing the value of different approaches, you reason logically, as 
you do, for example, about GA's:


"If you have a "genetic language" that is sufficiently general, and infinite

computing power, then a good genetic algorithm can eventually solve any
computable problem."


Well, put like that, how can GA's fail? Even if you take a more specific 
logical formulation like - (loosely off the top of my head) - "GA's can mix 
a given set of elements any which way to arrive at  new, unforeseen 
approaches to any problem" - it can still sound good, as if it might solve 
AGI.


However, logical reasoning proves nothing - and can be just as easily used 
to "disprove" all these approaches, as indeed it has been.


What you have to do in order to produce a true, crux idea, I suggest, is not 
just define your approach but APPLY IT TO A PROBLEM EXAMPLE OR TWO of 
general intelligence - show how it might actually work.


You have to show how, for example, your GA might enable your 
lego-constructing system to solve an unfamiliar problem about building a dam 
of rocks in water. You must show that even though it had only learned about 
regularly-shaped bricks, it could nevertheless recognize irregularly-shaped 
rocks as, say, "building blocks"; and even though it had only learned to 
build on solid ground, it could nevertheless proceed to build on ground 
submerged in water. [I think BTW, when you try to do this, you will find 
that GA's *won't* work]


You don't just have to tell me in general terms what your programming 
approach can do, you have to apply it to specific true AGI END-PROBLEMS - 
and invite additional tests.


I suggest you look again at any of the approaches you mention, as formally 
outlined, and I suggest you will not find a single one, that is actually 
applied to an end-problem, to a true test of its AGI domain-crossing 
potential. And I think if you go through the archives here you also won't 
find a single attempt in relevant discussions to do likewise. On the 
contrary, end-problems are shunned like the plague.


(And you see yet another example of this general philosophy in Arthur 
Murray's recent formulation of his system/approach - no attempt to apply it 
to a general intelligence end-problem, only the non-AGI problems that he has 
carefully selected. Happens again and again. Yet another reason why that 
"General Test" is so important).


Without application to AGI problem examples,  you don't have crux ideas, you 
only have "hand-waving around the problem"  And I quote an eloquent post 
from a Slashdot discussion of the McKinstry/Singh suicides - which 
underlines my points - it testifies to the long history of different AI/AGI 
schools of programming, which all,  I suggest, were never really applied to 
AGI end-problems, or a true AGI test, as they should have been from the very 
beginning. The post also offers hope because it shows that when you really 
pressure AI/AGI-ers to apply themselves to end-problems, as with DARPA, you 
start to get real results - but you do really have to pressure. (I 
appreciate DARPA's AGI status is debatable):


"""It's discouraging reading this. Especially since I knew some of the Cyc 
[cyc.com] people back in the 1980s, when they were pursuing the same idea. 
They're still at it. You can even train their system [cyc.com] if you like. 
But after twenty years of their claiming "Strong AI, Real Soon Now", it's 
probably not happening.


I went through Stanford CS back when it was just becoming clear that "expert 
systems" were really rather dumb and weren't going to get smarter. Most of 
the AI faculty was in denial about that. Very discouraging. The "AI Winter" 
followed; all the startups went bust, most of the research projects ended, 
and there was a big empty room of cubicles labeled "Knowledge Systems 
Laboratory" on the second floor of the Gates Building. I still wonder what 
happened to the people who got degrees in "Knowledge Engineering". "Do you 
want fries with that?"


MIT went into a phase where Rod Brooks took over the AI Lab and put 
everybody on little dumb robots, at roughly the Lego Mindstorms level. 
Minsky bitched that all the students were soldering instead of learning 
theory. After a deca

Re: [agi] The Test

2008-02-05 Thread Mike Tintner

Richard: Mike,


When you say "I just believe that our thinking works on different 
mechanistic/ computational principles to those of programs" ... What you 
are really trying to say is that intelligence is not captured by a certain 
type of rigid, pure symbol-processing AI.  The key phrase is 
"symbol-processing", which has connotations a certain approach to the 
representation of knowledge


Richard,

Thank you for a sympathetic response, but I suggest - in a well-meaning way - 
that it would be worth your while giving me credit, if only provisionally, 
for a little more intelligence and awareness than you do.


Very briefly, my focus a while back in attacking programs was not on the 
sign/ semiotic - and more particularly, symbolic -  form of programs, 
although that is v. important too.


My focus was on the *structure* of programs - that's what they are: 
structured and usually sequenced sets of instructions. No matter how 
sophisticated their structure, and/or their capacity to adapt their 
structure, they are still structured.


So what I am saying - v. loosely for the moment - is that you *cannot* 
employ a programmed/ *structured* approach to *ill-structured* problems - 
and there isn't any evidence that humans actually do, or that AGI's can 
successfully. Hence it was that the great Herbert Simon himself 
distinguished between "programmed" and "NONPROGRAMMED" decisions - his term, 
which still obtains to this day in management science, and is not about 
symbol-processing. And ill-structured problems, I suggest, are the stuff of 
AGI.


As I said, I will set out one last, v. different and systematic presentation 
of this POV in a while, which people can ignore or not  - I did not mean to 
reignite the argument now, and there's no need to comment for the moment.


P.S. Here's one analogy and also much-more-than-analogy of what I am talking 
about. As I said in singularity, the new genetics of Venter & co is changing 
everything, and will change the way we think about programs too. It's 
fundamentally changing paradigms. One way that it's doing this, (which I 
didn't mention),  is making us think in terms of "self-assembling" genomes. 
Now clearly "self-assembly" is a totally different paradigm for thinking 
about everything - a paradigm we haven't even begun to master. We don't know 
how to create self-assembling machines, only ones pre-assembled according to 
a rigid blueprint (although we are starting) - and it's pre-assembled 
machines that have shaped science's entire view of the world.  Nature 
mastered self-assembly  long ago - with life. But it didn't, I suggest, just 
master self-assembling *forms*, it mastered self-assembling *behaviour*. 
Computers currently are only capable of programs - pre-assembled behaviour 
which must follow a structured blueprint. Human courses of action are by 
contrast, self-assembled, as they happen - still  more so than biological 
forms - examples of  "making it up as you go along" without any structured 
blueprint. That's what your post to me was. That's what the next minute of 
your life and every minute after that will be. And that's what AGI's will 
need to succeed and survive. Later.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93872219-b642cb


Re: [agi] The Test

2008-02-05 Thread Richard Loosemore

Mike Tintner wrote:

I believe we are 
thinking machines and not in any way magical. I just believe that our 
thinking works on different mechanistic/ computational principles to 
those of programs - which someone apart from me, surely should at least 
question.  It has to be a serious *possibility* that programs equal 
narrow AI, and are the wrong paradigm for AGI.


Mike,

You are repeating a statement that you have made before (and which I 
have addressed before), and this is just going to cause great confusion 
again.


When you say "I just believe that our thinking works on different 
mechanistic/ computational principles to those of programs" you are 
using the word "programs" in a misleading way.


"Programs" in general are capable of implementing any type of AI 
whatsoever, ranging from the most stupid-brained AI that you hate, to 
the most flexible, creative, unpredictable (etc) AI that you would like 
to see.


What you are really trying to say is that intelligence is not captured 
by a certain type of rigid, pure symbol-processing AI.  The key phrase 
is "symbol-processing", which has connotations a certain approach to the 
representation of knowledge.


The way you phrase your position, you look like one of the "computers 
cannot do intelligence because intelligence is not COMPUTATION" crowd. 
These people believe that there is something magical and 
non-computational about thought.


You are not the first person to complain about the problems associated 
with the narrow symbol-processing approach, not by a long way:  many of 
the experts on this list already bought that message decades ago.


So:  I already agree that intelligence is not going to happen that way! 
 But every time you say "intelligence is more than just programs" I can 
only shake my head and watch while many other people on this list take 
your words the wrong way and a huge, pointless debate kicks off again.




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93829695-4b54c3


Re: [agi] The Test

2008-02-05 Thread wannabe

Benjamin Johnston wrote, among other things:


I like to think about Deep Blue a lot. Prior to Deep Blue, I'm sure
that there were people who, like you, complained that nobody has
offered a "crux" idea that could make truly intelligent computer chess
system. In the end Deep Blue appeared to win largely by brute force
computing power. What I find most interesting is that Kasparov didn't
say he was beaten by a particularly strong computer chess system, but
claimed to see deep intelligence and creativity in the machine's play.
That is, he didn't think Deep Blue was merely a slightly better version
than the other chess systems, but he felt it had something else. He was
surprised by the way the machine was playing, and even accused the IBM
team of cheating.


You know, this gets me thinking that maybe the idea of intelligence is  
misleading.  Maybe it's not really something like power or strength  
that is objective, but something more like deliciousness, that exists  
only as something we say about something else and isn't really a  
characteristic of the object.

andi

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93812712-fac443

Re: [agi] The Test

2008-02-05 Thread Joseph Gentle
On Feb 5, 2008 11:36 PM, Benjamin Johnston <[EMAIL PROTECTED]> wrote:
> Well, as I said before, I don't know which will directly produce general
> intelligence and which of them will fail.



> My point, again, is that we don't know how the first successful AGI will
> work - but we can see many plausible ideas that are being pursued in the
> hope of creating something powerful. Some of these are doomed to fail; but
> we don't really know which ones they are until we try them. It doesn't seem
> fair for you to say that nobody has offered a "crux" idea, and I'd prefer
> that people follow their passions rather than insist that everybody should
> get hung up on the centuries/millennia old question of what exactly is
> intelligence.

Thank you. Your list is very informative. I think it's worth mentioning
the dangerous phenomenon you touched on here. For some reason, people
get religious about their approaches. "No, my idea is better. I can't
prove why yet, but it'll work."  The problem with this line of
reasoning (as we've all experienced) is it ends with "Lets just not
argue about which approach is better."  I think we all agree that some
approaches _are_ better than others. We might not agree on which ones
are which, but I don't want to run away from that discussion. You
mentioned passion -- I'm passionate about solving strong AI, not about
pursuing my ideas even if they're wrong. I don't think any of us want
to waste our time working on a flawed idea because nobody told us.

The other reason I think discussing this stuff is worthwhile is thus:

I think eventually what we all want is the same. We want a machine
into which we plug a reward function and maybe a webcam or something
and then we can teach it to talk and think. That's what I imagine
anyway. Maybe any of the methods you talked about could be used to
make that. It's like we're dreaming of inventing computers while we
work on our different CPU designs. I want to talk about how the
computer will fit together. If the CPU is the hard bit, I want as good
a spec as possible, and that means knowing and discussing the
infrastructure. I don't want to work on something which could never
actually power an intelligent system because I didn't think big
picture. That's a real danger.

If it's true, tell me that I'm in the wrong forest.

-J

> -Benjamin Johnston
>
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93801513-06c777


Re: [agi] The Test

2008-02-05 Thread Joseph Gentle
On Feb 4, 2008 11:42 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> The test, I suggest, is essentially; not the Turing Test or anything like
> that but "The General Test." If your system is an AGI, or has AGI potential,
> then it must first of all have a skill and be able to solve problems in a
> given doman. The "test" is then: can it a) independently learn a skill in an
> adjacent domain, and/or  b) pass a problemsolving test in an adjacent domain
> (to be set by someone other than the systembuilder!). If it can play soccer,
> can it learn how to play rugby and solve problems in rugby? If it can build
> Lego constructions, can it learn to build a machine? If it can search for
> hidden items,  can it learn to play hide-and-seek? The General Test then is
> simply a test of whether a system can generalize its skill(s). If it knows
> how to put together a set of elements in certain kinds of ways, can it then
> learn to put those same elements together [and perhaps some new ones] in new
> kinds of ways?


Interesting test. However, as others have mentioned, it is a difficult
test to evaluate in practice. Here's my proposal:

I propose that the purpose of any 'intelligent' system / agent is to
pursue goals. These goals can be specified in any way: 'classical'
victory condition-type goals, maintenance goals, whatever. An
intelligent system should be evaluated based on how well it can pursue
goals.

In particular, the quality of an intelligent system should be
evaluated based on:
- Optimality. An intelligence should try to optimize its goal-seeking
behavior for maximum 'reward'.
- Adaptability. If the world changes and makes different strategies
optimal, the system should account for this in its behaviour.
- Generality. A general intelligence should be able to 'solve' as wide
a range of goals or subgoals as possible.
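
One toy way to turn those three criteria into numbers (the agent and
environment interfaces here are hypothetical, just to make the idea
concrete):

def evaluate(agent_factory, goals, environments, changed_environments,
             threshold=0.5):
    # agent_factory() builds a fresh agent; env.run(agent, goal) is a
    # hypothetical interface returning a reward in [0, 1].
    rewards, shifted, solved = [], [], 0
    for goal in goals:
        for env, changed in zip(environments, changed_environments):
            agent = agent_factory()
            before = env.run(agent, goal)      # the original world
            after = changed.run(agent, goal)   # the world after a change
            rewards.append(before)
            shifted.append(after)
            if before >= threshold:
                solved += 1
    n = len(rewards)
    return {
        "optimality": sum(rewards) / n,    # average reward achieved
        "adaptability": sum(shifted) / n,  # reward once the world changes
        "generality": solved / n,          # fraction of goals handled at all
    }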


Clearly humans are classified intelligent with this metric. Dogs are
still intelligent, but less intelligent. They have much less
generality in the goals they can solve. They are also less adaptable
('creative') than people.

Is a washing machine intelligent? It certainly minimally fits the
'intelligence' criteria of being able to solve a goal. The goal of my
washing machine is to make it easy for me to wash my clothes. Does it
do this optimally? No. Is it adaptable? Not really. Can it solve any
other goals? No.

Perhaps to be flagged 'intelligent' some minimal benchmark in
optimality, adaptability and generality is required. This is not the
interesting end of the scale.


> That's what people should be doing here centrally - discussing and
> exchanging their ideas about how to solve the General Test. The fact that no
> one is discussing this (despite vast volumes of overall discussion) suggests
> very powerfully that no one *has* an idea.

I think solutions are easy. Asking the right question is hard. Here are
my favorites:

"What kind of information does a general intelligence need to store
and manipulate?"
"What are the most fundamental elements of information you need to
store? What requirements are there on these pieces of information? How
can they be combined?"
"What is the simplest goal an intelligent system could possibly learn to solve?"
"What features must a good intelligent system have? If you were
writing a software engineering spec for an AI, what would it look
like?"


... anyone?

-J

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=93782378-41d2ce


Mindforth and the Wright Brothers ... [WAS Re: [agi] The Test]

2008-02-05 Thread Richard Loosemore

A. T. Murray wrote:
Mike Tintner wrote in the message archived at 
http://www.mail-archive.com/agi@v2.listbox.com/msg09744.html 


[...]
The first thing is that you need a definition 
of the problem, and therefore a test of AGI. 
And there is nothing even agreed about that - 
although I think most people know what is required. 
This was evident in Richard's recent response to 
ATMurray's recent declaring of his "Agi" system. 
Richard clearly knew pretty well why that system 
failed the AGI "test" but he didn't have an explicit 
definition of the test at his fingertips.


Richard Loosemore "clearly knew pretty well" nothing
of the sort. His was a lazy man's response. He did not 
download and experiment with the MindForth program at

http://mentifex.virtualentity.com/mind4th.html and
http://mind.sourceforge.net/mind4th.html -- he only
made a few generalizations about what he lazily
_thought_ MindForth might be doing. In the archive
http://www.mail-archive.com/agi@v2.listbox.com/msg09674.html
Richard Loosemore vaguely compares sophisticated
MindForth with the canned-response "Eliza" program --
which nobody ever claimed was an artificial intelligence.

Richard Loosemore furthermore suggested that all of 
the cognitive processes in the Eysenck & Keane textbook
of Cognitive Psychology would have to be implemented
in MindForth before it could be said to have achieved
True AI functionality. That demand is like telling
Wilbur and Orville Wright that they have to demo
a transatlantic French Concorde jet before they may 
claim to have achieved "true airplane functionality."

> [snip]

Well, I have had some people get mad at me before, but not when I was 
being so ... charming.


Arthur, if there is an analogy between Mindforth and the Wright 
Brothers, then you, alas, are just standing on the sand at Kitty Hawk, 
waving your hands up and down and shouting "I can flap!  I can flap!".


You don't have to build Concorde at the first attempt, you just have to 
get your plane off the ground and show that it can travel any distance 
at all under its own power.


I assumed that your own description of what Mindforth did was accurate 
(it was, wasn't it?) and on that basis I saw it merely flapping its 
wings in the same way that Eliza did 30 years ago.





Richard Loosemore



RE: [agi] The Test

2008-02-05 Thread Benjamin Johnston

> Fine. Which idea of anyone's do you believe will directly produce 
> general intelligence  - i.e. will enable an AGI to solve problems in 
> new unfamiliar domains, and pass the general test I outlined?  (And 
> everyone surely agrees, regardless of the test, that an AGI must have 
> "general" intelligence).

Well, as I said before, I don't know which will directly produce general
intelligence and which of them will fail.

I have my own theories about which approaches are more likely to succeed
than others, and about which approaches are fundamentally wrong. However,
most serious ideas seem to have a plausible story, and I'm not ready to
completely rule out any serious idea until it is proven wrong. 

I'll briefly discuss some ideas below. You may not agree with my
interpretation of the approaches, and you may not fully agree with my
argument about why they're plausible... but I think that you surely have to
agree that a plausible argument can be made for most of this research, and
it is clear that the people conducting the research can see themselves as
addressing the crucial questions. 

That is, while I'm not the right person to be arguing the details of these
approaches, I'm confident that many researchers here wouldn't be devoting
their time to their research if they didn't see a coherent picture for how
their work fits into the grand scheme of AGI.

Many apologies to other readers if I've not included your preferred approach
or have misrepresented/misinterpreted your ideas. I've just taken a quick
and informal sample here. The details aren't as important as the overall
message.

Logic
---
An automated theorem prover is an extremely general-purpose intelligence.
Consider, for example, how logics may be adapted to many different domains
on the Semantic Web or the increasing strength of competitors in General
Game Playing competitions (surely it won't be long before they're better
than the average human at any novel game?). Whether logic can be applied to
general-purpose embodied intelligent systems remains to be seen - I think
the symbol grounding problem points towards logic not being enough - but
researchers looking into logics with uncertainty or logics that incorporate
iconic representations are effectively exploring a possible "solution" to
the symbol grounding problem.

In other words, these researchers are saying "Logical deduction offers true
'general intelligence' in symbolic domains, and we're trying to adapt that
intelligence to real life situations": a plausible crux idea and worth
pursuing.
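As a toy illustration of why deduction is "general" in symbolic domains, here
is a minimal forward-chaining loop. It is a sketch only; the fact and rule
encoding is invented, not any particular prover's. The point is that the same
loop runs unchanged on whatever rule set you feed it.

# Toy forward-chaining prover over ground Horn rules (illustrative only; the
# fact/rule encoding is invented). A fact is a string; a rule is
# (list_of_premises, conclusion).

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # derive a new fact
                changed = True
    return facts

rules = [(["human(socrates)"], "mortal(socrates)"),
         (["mortal(socrates)"], "dies(socrates)")]
print(forward_chain(["human(socrates)"], rules))
# -> {'human(socrates)', 'mortal(socrates)', 'dies(socrates)'}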


Hybrid Systems
---
If we just keep doing what we're doing in "Narrow AI", but look at combining
many components into a coherent architecture then it seems plausible that
we'll eventually end up with a system that is indistinguishable from an
ideal general intelligence. It may not be an elegant answer, but it may be
an answer. This gives good reason to pursue integration.

Consider for example, problems like the DARPA Grand Challenges. In current
systems, obstacles may be specifically identified against a hand-coded
database. In the next generations, these representations might become more
generic and learnt from experience. I see a plausible progression to
increasingly more powerful systems. When the system can identify and learn
the behavior of any new object it encounters (and the rules that govern it),
it may then be able to reason about that object and construct plans that
use the object in novel ways. At first the planning algorithms seek merely
to visit way-points. Future versions, with richer goals, richer models and
more powerful reasoning may autonomously deduce novel behaviors beyond their
explicit programming (e.g., that truck will run into the pedestrian! my
higher goal of not hurting pedestrians means that the best plan is one in
which I stop in front of the truck so that it crashes into me instead of the
pedestrian).
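As a toy illustration of that last step - a plan chosen by a goal hierarchy
rather than by explicit programming - something like the following sketch,
with entirely made-up plan names, outcomes and cost weights:

# Illustrative only: rank candidate plans against a weighted goal hierarchy in
# which harming a pedestrian is penalised far more heavily than damage to the
# vehicle or a missed way-point. All names and weights are invented.

GOAL_WEIGHTS = {"pedestrian_harmed": 1000.0,
                "vehicle_damaged": 10.0,
                "waypoint_missed": 1.0}

def plan_cost(predicted_outcome):
    # Sum the weights of every bad outcome this plan is predicted to cause.
    return sum(GOAL_WEIGHTS[k] for k, happens in predicted_outcome.items() if happens)

candidate_plans = {
    "continue_to_waypoint": {"pedestrian_harmed": True,
                             "vehicle_damaged": False,
                             "waypoint_missed": False},
    "block_truck":          {"pedestrian_harmed": False,
                             "vehicle_damaged": True,
                             "waypoint_missed": True},
}

best = min(candidate_plans, key=lambda name: plan_cost(candidate_plans[name]))
print(best)   # -> block_truck: the "novel" behaviour falls out of the goal weights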


Genetic Algorithms and other search algorithms
---
If you have a "genetic language" that is sufficiently general, and infinite
computing power, then a good genetic algorithm can eventually solve any
computable problem. Evolution eventually discovered human beings - given
infinite computing power, at worst you could evolve a virtual human! It
seems reasonable then to consider exploring genetic or other search
algorithms that have a bias towards the kinds of problems encountered by
humans and AGI.
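For concreteness, here is a minimal genetic-algorithm loop of the kind being
gestured at. The bit-string representation, fitness function and parameters
are toys chosen for illustration, standing in for a "sufficiently general"
genetic language.

import random

# Minimal GA sketch: evolve a bit-string toward a fixed target. The
# representation, fitness function and parameters are illustrative toys.

TARGET = [1] * 20

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=200, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(TARGET))         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            children.append(child)
        population = children
    return max(population, key=fitness)

print(fitness(evolve()), "of", len(TARGET))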


Activation, Similarity, Analogizing, HTM, Confabulation and other "targeted"
approaches
---
There seem to be a lot of groups working on specific modes of thought. You
may not be convinced that they're solving enough of the problem, but it
seems plausible to me that maybe general intelligence really is easy once
you've managed to solve some particular problem. That is, we might have a
80/20 rule or even a 99.9/0.1 rule at play with intelligence.

Maybe the brain only does learn a few techniques for problem solv

Re: [agi] The Test

2008-02-05 Thread William Pearson
On 05/02/2008, Mike Tintner <[EMAIL PROTECTED]> wrote:
> William P : I can't think
> of any external test that can't be fooled by a giant look up table
> (Ned Block thought of this argument first).
>
> A by definition requirement of a "general test" is that the systembuilder
> doesn't set it, and can't prepare for it as you indicate. He can't know
> whether the test for, say, his lego-constructing system is going to be
> building a machine, or constructing a water dam with rocks, or a game that
> involves fitting blocks into holes.

He can't know, but he might guess. It will be hard to distinguish between
the builder's lucky guess(es) and genuine generality.

>  His system must be able to adapt to any
> adjacent-domain activity whatsoever. That too is the point of the robot
> challenge test - the roboticists won't know beforehand what that planetary
> camp emergency is going to be.

I think we have different ideas of what a test should be. I am looking
for a scientific test, in which repeatability and fairness are
important features.

One last question: what exactly defines 'adjacent' in your test? Is
composing poetry adjacent to solving non-linear equations?

I agree that this type of testing will winnow out lots of non-general
systems.  But it might let a few slip through the cracks or say a
general system is non-general. I would fail the test some days when I
am ill, as all I would want to do is go to sleep, not try to solve the
problem.

  Will Pearson



Re: [agi] The Test

2008-02-04 Thread Mike Tintner


Benjamin: > I believe that you're misrepresenting the situation. I would 
guess that
most people on this list have an idea that they are pursuing because they 
believe it has a chance at creating general intelligence.


Fine. Which idea of anyone's do you believe will directly produce general 
intelligence  - i.e. will enable an AGI to solve problems in new unfamiliar 
domains, and pass the general test I outlined?  (And everyone surely agrees, 
regardless of the test, that an AGI must have "general" intelligence).


Please note very carefully - I am only asking for an idea that will play a 
direct *part* in solving new-domain problems. Of course *many* ideas will be 
required to do the job completely. I am only asking for one that gives a 
glimmer of hope - and am saying  I haven't seen a single idea that addresses 
that problem/goal directly. A new search algorithm, for example, does not 
address the problem. Neither does a new logic of uncertainty. They might be 
good and useful new ideas, but they don't address the problem. I have, 
however, seen people v. definitely avoiding the problem -and hoping that a 
solution will "emerge"  (not a chance).


And if you do address the problem, I think you'll find that it requires not 
just a creative idea, but a whole new creative *paradigm* of 
problem-solving.


Benjamin: I get the impression from this posting, and your earlier posting 
about a
"Simple mathematical test of cog sci" that you see intelligence as 
something "crazy and spontaneous" (to use your words) - something almost 
magical. With that position, it would seem logical for you to expect a 
solution to AGI to also appear magical.


No, that impression is completely wrong - although since you're the second 
person to say that, maybe it's my fault. I believe we are thinking machines 
and not in any way magical. I just believe that our thinking works on 
different mechanistic/computational principles from those of programs - which 
someone apart from me surely should at least question.  It has to be a 
serious *possibility* that programs equal narrow AI, and are the wrong 
paradigm for AGI. Hence, the oft-stated objection:


"The problem with most robots is that they tend to be, well, robotic. They 
know nothing they aren't programmed to know, and can do nothing they aren't 
programmed to do."

Robot Pals. Scientific American Frontiers (April 13, 2005).

So, Ben, show me one idea from anyone that can get a program to do what it 
isn't programmed to do - cross into unfamiliar domains.







Re: [agi] The Test

2008-02-04 Thread A. T. Murray
Mike Tintner wrote in the message archived at 
http://www.mail-archive.com/agi@v2.listbox.com/msg09744.html 

> [...]
> The first thing is that you need a definition 
> of the problem, and therefore a test of AGI. 
> And there is nothing even agreed about that - 
> although I think most people know what is required. 
> This was evident in Richard's recent response to 
> ATMurray's recent declaring of his "Agi" system. 
> Richard clearly knew pretty well why that system 
> failed the AGI "test" but he didn't have an explicit 
> definition of the test at his fingertips.

Richard Loosemore "clearly knew pretty well" nothing
of the sort. His was a lazy man's response. He did not 
download and experiment with the MindForth program at
http://mentifex.virtualentity.com/mind4th.html and
http://mind.sourceforge.net/mind4th.html -- he only
made a few generalizations about what he lazily
_thought_ MindForth might be doing. In the archive
http://www.mail-archive.com/agi@v2.listbox.com/msg09674.html
Richard Loosemore vaguely compares sophisticated
MindForth with the canned-response "Eliza" program --
which nobody ever claimed was an artificial intelligence.

Richard Loosemore furthermore suggested that all of 
the cognitive processes in the Eysenck & Keane textbook
of Cognitive Psychology would have to be implemented
in MindForth before it could be said to have achieved
True AI functionality. That demand is like telling
Wilbur and Orville Wright that they have to demo
a transatlantic French Concorde jet before they may 
claim to have achieved "true airplane functionality."

Sorry, Richard, but the AI breakthrough functionality
is, plain and simple, the ability to think -- to activate
an associative string of concepts and to express the 
thinking in the generative grammar of Chomsky.

There is no requirement that people be other than
lazy, smug and self-satisfied on this AGI list.
I felt that I should announce the end of the
decade-long process of debugging MindForth AI.

Now the controversy has spilled over to 
http://onsingularity.com/item/3175 
and the dust has not yet settled.

Richard is beginning to act like ESY!
>
> The test, I suggest, is essentially not the Turing 
> Test or anything like that, but "The General Test." 
> If your system is an AGI, or has AGI potential, 
> then it must first of all have a skill and be 
> able to solve problems in a given domain. [...]

The skill of MindForth is spreading activation -- 
from concept to concept -- under the direction of 
a Chomskyan linguistic superstructure.

Now I would like to digress and draw Ben Goertzel's
math-minded attention to my latest "creative idea" at
http://mind.sourceforge.net/computationalization.html#syllogism 
where on 30 January 2008 I thought up and loaded-up:

It may be possible to endow an AI mind with the ability 
to think in syllogisms by creating super-concepts or 
set-concepts above and beyond, and yet in parallel with, 
the ordinary concepts. Certain words like "all" or "never" 
may be coded to duplicate a governed concept and to endow 
the duplicate with only one factual or asserted attribute, 
namely the special relationship modified by the "all" or 
"never" assertion. Take, for instance, the following. 

All fish have tails. 
Tuna are fish. 
Tuna have tails. 

When the AI mind encounters an "all" proposition involving 
the verb "have" and the direct object "tails", a new, 
supervenient concept of "fish-as-set" is created to hold 
only one class of associative nodes -- the simultaneous 
association to "have" and to the "tail" concept. 

Whenever the basic "fish" concept is activated, the 
fish-as-set concept is also activated, ready to "pounce," 
as it were, with the supervenient assertion that all 
fish have tails. Thenceforth, when any animal is identified 
as being a fish by some kind of "isA" tag, the "fish-as-set" 
concept is also activated and the AI mind superveniently 
knows that the animal in question has a tail. The machine 
reasoning could go somewhat like the following dialog. 

Do tuna have tails? 
Are tuna plants? 
Tuna are animals. 
What kind of animals? 
Tuna are fish. 
All fish have tails. 
Tuna have tails. 

The ideas above conform with set theory and with the 
notion of neuronal prodigality -- that there need be 
no concern about wasting neuronal resources -- and with 
the idea of "inheritance" in object-oriented programming (OOP). 

Whereas normally a new fiber might be attached to the 
fiber-gang of a redundantly entertained concept, it is 
just as easy to engender a "concept-as-set" fiber in 
parallel with the original, basic concept. For some 
basic concepts, there might be multiple concept-as-set 
structures representing multiple "all" or "never" ideas 
believed to be the truth about the basic, ordinary concept. 

The AI mind, thinking about an ordinary concept in the 
course of problem-solving, does not have to formally engage 
in the 

Re: [agi] The Test

2008-02-04 Thread Benjamin Johnston


Er, you don't ask that in AGI. The general culture here is not to 
recognize the crux, or the "test" of AGI. You are the first person 
here to express the basic requirement of any creative project. You 
should only embark on a true creative project - in the sense of 
committing to it - if you have a creative "idea", i.e. if you have a 
provisional definition of the problem and a partial solution to it, 
one that will make people say, "Yes that might work." (Many more ideas 
will of course usually be required). It's one of the most 
extraordinary phenomena that everyone, but everyone, involved in the 
creative community of AGI resists doing that and has extensive 
rationalisations of why they're not doing that. Every AGI systembuilder 
has several "ideas" about how to do *other* things, that may be 
auxiliary to AGI, like search more efficiently, and logics to deal 
with uncertainty, but no one has offered a "crux" idea.


I believe that you're misrepresenting the situation. I would guess that 
most people on this list have an idea that they are pursuing because 
they believe it has a chance at creating general intelligence.


Some here are research students or professional academics, who enjoy the 
spirit of discussion but are careful about disclosing the specifics of 
their own ideas until they have first been 'timestamped' by publication.


Others have already made their position clear on this list and in 
publication, but it seems that you're rejecting their ideas as not 
"creative" enough. That doesn't mean nobody has offered a "crux" idea, 
it just means that nobody has offered an idea that you believe in. 
Personally, I'm optimistic about many of the ideas that have been 
floated here. I know that many will fail, and that only one can be the 
*first* to create AGI; but I see sufficient cause here for people to 
commit to their ideas and embark on a true creative project to test 
those ideas and discover which ones are workable and which aren't. Not 
everybody is sufficiently well staffed and funded to lay out a roadmap 
for their entire project today, but I'm sure everybody has an idea of 
how their work fits into a big picture, where they ultimately see it 
going, and how their work relates to AGI.


I get the impression from this posting, and your earlier posting about a 
"Simple mathematical test of cog sci" that you see intelligence as 
something "crazy and spontaneous" (to use your words) - something almost 
magical. With that position, it would seem logical for you to expect a 
solution to AGI to also appear magical.


While human intelligence is impressive, I don't think it is inherently 
magical. If you look at a timeline of evolution, you'll see that it took 
billions of years to evolve multi-cellular life, hundreds of millions of 
years to evolve mammals, but the evolutionary time difference between us 
and apes or even between us and mice is, by comparison, very small. 
Creating human-like intelligence doesn't appear to take much extra work 
(for evolution) once you can do mouse-like intelligence.


I like to think about Deep Blue a lot. Prior to Deep Blue, I'm sure that 
there were people who, like you, complained that nobody has offered a 
"crux" idea that could make truly intelligent computer chess system. In 
the end Deep Blue appeared to win largely by brute force computing 
power. What I find most interesting is that Kasparov didn't say he was 
beaten by a particularly strong computer chess system, but claimed to 
see deep intelligence and creativity in the machine's play. That is, he 
didn't think Deep Blue was merely a slightly better version than the 
other chess systems, but he felt it had something else. He was surprised 
by the way the machine was playing, and even accused the IBM team of 
cheating.


I'm certainly not saying that Deep Blue exhibited general intelligence 
or that it was anything more than a powerful move-searching machine 
(with well-designed heuristics); but the fact that Kasparov had played 
many computer systems before, yet saw an exceptional intelligence in 
Deep Blue, suggests to me that intelligence isn't magical, but is 
something that can emerge when a suitable mechanism is performed at 
sufficient scale. Look at our own brains, for example: while a single 
neuron is not yet 100% understood, each neuron appears to perform a 
minimal computation that, when combined in the billions, emerges to create 
an extremely robust intelligence. Brute force search or assembling 
millions of neurons might not seem like "crux" ideas, but when they are 
used towards a coherent vision, it is possible to create something that 
appears to be deeply intelligent.


I don't know how the first successful AGI will work - it may be based on 
a special logic, a search algorithm, a neural network, a vast knowledge 
base, some new mechanisms, or a hybrid combination of several approaches 
- I think, however, that we have seen many plausible ideas that are 
being pursued in the hope of

Re: [agi] The Test

2008-02-04 Thread Mike Tintner

William P : I can't think
of any external test that can't be fooled by a giant look up table
(Ned Block thought of this argument first).

A by definition requirement of a "general test" is that the systembuilder 
doesn't set it, and can't prepare for it as you indicate. He can't know 
whether the test for, say, his lego-constructing system is going to be 
building a machine, or constructing a water dam with rocks, or a game that 
involves fitting blocks into holes.  His system must be able to adapt to any 
adjacent-domain activity whatsoever. That too is the point of the robot 
challenge test - the roboticists won't know beforehand what that planetary 
camp emergency is going to be.


("External testing", BTW, I suggest, should be a fundamental bottom-up part 
of the culture of all AGI and robot systembuilding. Human students don't get 
to set their own exams/intelligence tests! It would be absurd).


What I think is so useful about the idea of a "general" test is that you 
*don't* try to define it specifically in advance -  other than as an 
"adjacent domain" test.  So it automatically applies to any would-be AGI 
whatsoever and at any level - whether it's, say, a snake-like system that only 
knows how to navigate through different terrains, or a complex would-be 
humanoid system that claims conversational powers. I think the latter is 
wildly unrealistic and unlikely to happen till the distant future, but it 
doesn't matter - the general test would still be applicable. And you would 
then have a focussed Turing test - if your system claims knowledge and can 
converse about one domain, then it should be able to learn and converse 
about a new but related domain.


If you had a general test as a focus, too, you wouldn't, I suggest, get AGI 
systembuilders wasting years of their lives on ill-defined projects, as has 
clearly happened and will otherwise continue to happen.






Re: [agi] The Test

2008-02-04 Thread William Pearson
On 04/02/2008, Mike Tintner <[EMAIL PROTECTED]> wrote:
> (And it's a fairly safe bet, Joseph, that no one will now do the obvious
> thing and say, "well, one idea I have had is...", but many will say, "the
> reason why we can't do that is...")

And maybe they would have a reason for doing so. I would like to think
of an external objective test; I like tests and definitions. My
stumbling block for thinking of external tests is that I can't think
of any external test that can't be fooled by a giant look up table
(Ned Block thought of this argument first). That is, something where,
when input X comes in at time t, output Y goes out. It can pretend to
learn things by having poor performance early on and then "improve".
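The objection is easy to state concretely. A toy sketch follows; the "table"
here is keyed on the entire input history, which is exactly what makes it
astronomically large yet formally sufficient.

# Toy illustration of the point: an agent that is literally a lookup table over
# the whole input history can mimic any behaviour, including apparent
# "learning", with nothing resembling intelligence inside.

class LookupTableAgent:
    def __init__(self, table, default="?"):
        self.table = table      # maps (entire input history so far) -> output
        self.history = ()
        self.default = default

    def respond(self, x):
        self.history += (x,)
        return self.table.get(self.history, self.default)

table = {("2+2?",): "5",             # feigned early incompetence...
         ("2+2?", "2+2?"): "4"}      # ...then apparent "improvement" on the retry
agent = LookupTableAgent(table)
print(agent.respond("2+2?"), agent.respond("2+2?"))   # -> 5 4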

Not all designs of systems use lots of external tests to prove their
abilities. Take making a new computer architecture that you want to
have the property of computational universality. You wouldn't try to
give it a few programs, see if it can run them, and declare it universal;
you would program it to emulate a Turing Machine to prove its
universality. Similarly, for new chip designs of an existing architecture,
you want to prove them equivalent to the old ones.

Generality of an intelligence is, I think, this sort of problem, due to
the inability to capture it flawlessly with external tests. I would be
interested to discuss internal requirements of systems, if anyone else
is.

I'd have thought that you with your desire for things to be
spontaneous would be wary of any external test that can be gamed by
non-spontaneous systems.

  Will Pearson



[agi] The Test

2008-02-04 Thread Mike Tintner


Joseph Gentle:> Eventually, you will have to write something which allows 
for emergent

behaviour and complex communication. To me, that stage of your project
is the interesting crux of AGI. It should have some very interesting
emergent behaviour with inputs other than the information SLAM
outputs... Why not just work on that difficult part now?


Er, you don't ask that in AGI. The general culture here is not to recognize 
the crux, or the "test" of AGI. You are the first person here to express the 
basic requirement of any creative project. You should only embark on a true 
creative project - in the sense of committing to it - if you have a creative 
"idea", i.e. if you have a provisional definition of the problem and a 
partial solution to it, one that will make people say, "Yes that might 
work." (Many more ideas will of course usually be required). It's one of the 
most extraordinary phenomena that everyone, but everyone, involved in the 
creative community of AGI resists doing that and has extensive 
rationalisations of why they're not doing that. Every AGI systembuilder has 
several "ideas" about how to do *other* things, that may be auxiliary to 
AGI, like search more efficiently, and logics to deal with uncertainty, but 
no one has offered a "crux" idea.


The first thing is that you need a definition of the problem, and therefore 
a test of AGI. And there is nothing even agreed about that - although I 
think most people know what is required. This was evident in Richard's 
recent response to ATMurray's recent declaring of his "Agi" system. Richard 
clearly knew pretty well why that system failed the AGI "test" but he didn't 
have an explicit definition of the test at his fingertips.


The test, I suggest, is essentially not the Turing Test or anything like 
that, but "The General Test." If your system is an AGI, or has AGI potential, 
then it must first of all have a skill and be able to solve problems in a 
given domain. The "test" is then: can it a) independently learn a skill in an 
adjacent domain, and/or b) pass a problem-solving test in an adjacent domain 
(to be set by someone other than the systembuilder!). If it can play soccer, 
can it learn how to play rugby and solve problems in rugby? If it can build 
Lego constructions, can it learn to build a machine? If it can search for 
hidden items,  can it learn to play hide-and-seek? The General Test then is 
simply a test of whether a system can generalize its skill(s). If it knows 
how to put together a set of elements in certain kinds of ways, can it then 
learn to put those same elements together [and perhaps some new ones] in new 
kinds of ways?
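One way to pin the protocol down operationally is sketched below. All names
are invented; the essential design point is that the adjacent domain and the
scoring are chosen by an external examiner, never by the systembuilder.

# Sketch of a "General Test" protocol. All names are invented; the examiner,
# not the builder, picks the adjacent domain and sets the problems.

def general_test(system, home_domain, examiner, learning_budget, threshold):
    baseline = examiner.score(system, home_domain)         # must already have a skill
    adjacent = examiner.pick_adjacent_domain(home_domain)   # unknown to the builder
    system.learn(adjacent, budget=learning_budget)          # independent learning phase
    transfer = examiner.score(system, adjacent)             # externally set problems
    return baseline >= threshold and transfer >= threshold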


The robotic challenge test set by the ICRA is a good one, precisely because 
it is a "General Test,"  requiring robotbuilders to solve *any* breakdown of 
equipment that may reasonably occur in a planetary exploration camp - and 
generalize their existing repair skills.


That's what people should be doing here centrally - discussing and 
exchanging their ideas about how to solve the General Test. The fact that no 
one is discussing this (despite vast volumes of overall discussion) suggests 
very powerfully that no one *has* an idea.


(And it's a fairly safe bet, Joseph, that no one will now do the obvious 
thing and say, "well, one idea I have had is...", but many will say, "the 
reason why we can't do that is...") 


