[agi] Legg and Hutter on resource efficiency was Re: Yawn.

2008-01-14 Thread William Pearson
On 14/01/2008, Pei Wang [EMAIL PROTECTED] wrote:
 On Jan 13, 2008 7:40 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

  And, as I indicated, my particular beef was with Shane Legg's paper,
  which I found singularly content-free.

 Shane Legg and Marcus Hutter have a recent publication on this topic,
 http://www.springerlink.com/content/jm81548387248180/
 which is much richer in content.


I think this can also be found here

http://arxiv.org/abs/0712.3329

For those of us without springerlink accounts.


"While we do not consider efficiency to be a part of the definition of
intelligence, this is not to say that considering the efficiency of
agents is unimportant. Indeed, a key goal of artificial intelligence is
to find algorithms which have the greatest efficiency of intelligence,
that is, which achieve the most intelligence per unit of computational
resources consumed."

Why not consider resource efficiency a thing to be adapted? It governs
which problems can be solved.

An example: consider two android robots with finite energy supplies
tasked with a long foot race.

One shuts down all processing non-essential to its current task of
running (sound familiar to what humans do? I certainly think better
while walking), so it uses less energy.

The other one attempts to find programs that precisely predict its
input given its output, churning through billions of possibilities and
consuming vast amounts of energy.

The one that shuts down its processing finishes the race and gets the
reward; the other one runs its battery down by processing too much and
has to be rescued, getting no reward.

As they have defined it, only outputting can make the system more or
less likely to achieve a goal, which is a narrow view.
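The thought experiment above can be sketched as a toy simulation. All numbers here are invented assumptions (the per-step move cost, the predictor robot's per-step search cost, the race length, and the battery size); the point is only that compute spending draws down the same budget as action:

```python
# Toy model of the race: each robot starts with the same energy budget.
# Moving one step costs MOVE_COST; the "predictor" robot also pays an
# assumed per-step computation cost for its heavy model search.
MOVE_COST = 1.0      # energy per step of running (invented)
COMPUTE_COST = 4.0   # energy per step of model search (invented)
RACE_LENGTH = 50     # steps to the finish line (invented)
BATTERY = 120.0      # starting energy (invented)

def run(compute_per_step: float) -> bool:
    """Return True if the robot finishes the race before its battery dies."""
    energy, position = BATTERY, 0
    while position < RACE_LENGTH:
        step_cost = MOVE_COST + compute_per_step
        if energy < step_cost:
            return False          # stranded: has to be rescued, no reward
        energy -= step_cost
        position += 1
    return True                   # finishes the race: gets the reward

frugal_finishes = run(0.0)            # shuts down non-essential processing
predictor_finishes = run(COMPUTE_COST)  # churns through candidate models
```

Under these assumed numbers the frugal robot finishes while the predictor strands itself, which is the narrowness being pointed at: the compute spent never appears in the reward signal, only in the energy budget.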

  Will Pearson

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=85547641-0ef2b3

Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Benjamin Goertzel
Richard,

I don't think Shane and Marcus's overview of definitions-of-intelligence
is poor quality.

I think it is just doing something different than what you think it should be
doing.

The overview is exactly that: A review of what researchers have said about
the definition of intelligence.

This is useful as a view into the cultural mind-space of the research
community regarding the intelligence concept.

As for their formal definition of intelligence, I think it is worthwhile as a
precise formulation of one perspective on the multidimensional concept
of intelligence.  I don't agree with them that they have somehow captured
the essence of the concept of intelligence in their formal definition though;
I think they have just captured one aspect...

-- Ben G





[agi] Re: [singularity] The establishment line on AGI

2008-01-14 Thread Benjamin Goertzel
 Also, this would involve creating a close-knit community through
 conferences, journals, common terminologies/ontologies, email lists,
 articles, books, fellowships, collaborations, correspondence, research
 institutes, doctoral programs, and other such devices. (Popularization is
 not on the list of community-builders, although it may have its own value.)
 Ben has been involved in many efforts in these directions -- I wonder if he
 was thinking of Kuhn.

Indeed, working toward the formation of such a community is one of the
motivations underlying the AGI-08 conference.   And also underlying the
OpenCog AGI project I'm initiating together with the SIAI, see
opencog.org

My prior efforts in this direction, such as

-- AGI email list
-- 2006 AGI workshop
-- two AGI edited volumes

have been successful but smaller-scale.

My feeling is that the time is ripe for the self-organization of a really viable
AGI research community.

In connection with AGI-08, we have put up a wiki page intended to
gather proposals and suggestions regarding the formation of a more
robust AGI community

http://www.agi-08.org/proposals.php

If any of y'all have relevant ideas, feel free to post them there.

I don't actually have a lot of time for community-building activities, as my
main focus is on Novamente LLC (and Novamente's work on AGI plus its
narrow-AI consulting work that pays my bills).  But, I try to make time for
community-building, because I think it's very important and will benefit
all of us working in the field.

I did read Kuhn back in college, and was impressed with his insight,
along with (even more so) that of Imre Lakatos, with his theory of
scientific research programmes.  In Lakatos's terms, what needs to be
done is to build a community that can turn AGI into an overall
progressive research program.  I discuss these philosophy of science
ideas a bit in The Hidden Pattern, and earlier in an essay

http://www.goertzel.org/dynapsyc/2004/PhilosophyOfScience_v2.htm

Further back, I remember when I was 5 years old, reading a draft of
a book my dad was writing (a textbook of Marxist sociology), and
encountering the word "paradigm" and not knowing what it meant.
As I recall, I asked him and he tried to explain and I did not understand
the explanation very well ;-p ... and truth be told, I still find it a fuzzy
term, preferring Lakatos's characterization of research programmes.
However, Kuhn had more insight than Lakatos into the sociological
dynamics surrounding scientific research programmes...

-- Ben G



Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Richard Loosemore

Pei Wang wrote:

On Jan 13, 2008 7:40 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


And, as I indicated, my particular beef was with Shane Legg's paper,
which I found singularly content-free.


Shane Legg and Marcus Hutter have a recent publication on this topic,
http://www.springerlink.com/content/jm81548387248180/
which is much richer in content.


Unfortunately, this paper is not so much richer in content as 
containing a larger number of words and formulae.  It adds nothing to 
the previous (poor quality) paper, falls into exactly the same pitfalls 
as before, and repeats the trick of pulling an arbitrary mathematical 
definition out of the hat without saying why this definition should 
correspond with the natural or commonsense definition.


Any fool can mathematize a definition of a commonsense idea without 
actually saying anything new.



Richard Loosemore



Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Richard Loosemore

Benjamin Goertzel wrote:

Richard,

I don't think Shane and Marcus's overview of definitions-of-intelligence
is poor quality.


I'll explain why I said "poor quality".

In my experience of marking student essays, there is a stereotype of the
"night before deadline" essay, which goes like this.  If the topic is X,
the student grabs a bunch of definitions that other people have given of
X, and they start the essay by saying "Well, we don't really know what X
is in all it's [sic] glory, but So-and-so has said [definition 1].  In
contrast, So-and-so-other has disagreed and said that [definition 2]"
.. and on and on through a long and miserable list of quotations.


Then, realizing that something more is needed, the essay writer winds up 
with a commentary that comes out of nowhere and arbitrarily declares 
that some point of view or some formula is probably the best.


In reading Legg and Hutter's first essay in the AGIRI-06 workshop, and 
now their more recent expansion of that paper, I see no difference 
between what they did and the stereotypic night-before-deadline essay. 
That is why I condemned it with the phrase "poor quality".


As for your other (very diplomatic) comment, phlogiston was also a nice 
example of a precise formulation of one perspective on the 
multidimensional concept of combustion.  Multidimensional concepts are 
sometimes not what they are cracked up to be.


Your job is to be diplomatic.  Mine is to call a spade a spade. ;-)


Richard Loosemore







[agi] Comments on Pei Wang's What Do You Mean by “AI”?

2008-01-14 Thread Richard Loosemore

Pei,

I have a few thoughts about your paper.

Your classification scheme for different types of intelligence 
definition seems to require that the concepts of "percepts", "actions" 
and "states" be objectively measurable or identifiable in some way.


I see this as a problem, because the concept of a percept (say) may 
well be so intimately connected with what intelligent systems do that we 
could never say what counts as a percept without making reference to an 
intelligent system.


For example, would it count if an intelligent human perceived an 
example of a very abstract concept like hegemony?  Would 
[hegemony] be a percept, or would you only allow primitive percepts that 
are directly picked up by an intelligence, like oriented line segments?


More precisely, I think that percepts like [hegemony] are indeed bona 
fide percepts, but they are very unlikely to be defined without making 
reference to the systems that developed the concept.  So [hegemony] does 
not have a closed-form definition and the best we can do is to say that 
among a large population of human intelligences, there is a point in 
concept-space that is given the word-label "hegemony", but if you were to 
look inside each individual mind you would find that the same name 
actually picks out a unique cluster of connections to other concepts (each of 
which, in turn, has its own subtle differences among all the different 
individuals).


The same story can be told about actions, and internal states.

But if there is only a loose correspondence (across individuals) between 
terms labelled with the same name, then how can we even begin to think 
that the act of comparing states, percepts and actions between computers 
and humans (as you do in your paper) would be a good way to dissect the 
different meanings of intelligence?


The only way out of this problem would be to define some normative 
central tendency of the Ps, Ss and As across the population of actual 
intelligent agents (i.e. human minds, at this time in history) and then 
move on from there.  But of course, that would be tantamount to 
declaring that intelligence is basically defined by what human minds do.


My point here is not to ask questions about how percepts map onto one 
another (across individuals), but to say that the very question of which 
things count as percepts cannot be answered without looking at the 
chunks that have actually been formed by human beings.  To put this in 
stark relief:  suppose we came across an alien intelligence that did not 
use oriented line segments at a very low level of its visual system, but 
instead used pairs of blobs, separated by different distances across the 
retina.  Everything it perceived above this level would then be various 
abstractions of that basic visual building block.  This alien mind would 
be carving nature along different joints - parsing it very differently - 
and it might well be that it simply never constructs high level concepts 
that map onto our own concepts, so the two sets of percepts (ours and 
theirs) are just not comparable.  They might never perceive an example 
of hegemony, and we might never be able to perceive an instance of one 
of their concepts either.  How would we then - even in principle - start 
talking about whether the same percepts give rise to the same internal 
states in the two systems?  The percepts would depend too much on the 
actual structure of the two different minds.  The percepts would not 
be objective things.


I see no way out of this, because I cannot see any way that the abstract 
notion of *objective* percepts, states and actions can be justified. 
The validity of any proposed objective scheme can be challenged, and so 
long as it can be challenged, the notion of percepts, states and actions 
cannot be used as a starting point for a discussion of what 
intelligence actually is.


What do you think?




Richard Loosemore




Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Benjamin Goertzel
 Your job is to be diplomatic.  Mine is to call a spade a spade. ;-)


 Richard Loosemore

I would rephrase it like this: Your job is to make me look diplomatic ;-p



Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Richard Loosemore

Benjamin Goertzel wrote:

Your job is to be diplomatic.  Mine is to call a spade a spade. ;-)


Richard Loosemore


I would rephrase it like this: Your job is to make me look diplomatic ;-p


I agree: I am undiplomatic and unreasonable.

"The reasonable man adapts himself to the world.  The unreasonable one 
persists in trying to adapt the world to himself."


Dear me, two quotes from Man and Superman in one week. ;-)




Richard Loosemore



Re: [agi] Comments on Pei Wang's What Do You Mean by “AI”?

2008-01-14 Thread Pei Wang
Richard,

Thanks for the detailed comments!

If you spend some time with my semantic theory, you will see that I
do not believe any concept can get any kind of objective meaning or
true definition. All meanings depend on an observer, with its
observational abilities and limitations. The so-called "objective
meaning" is just the commonly agreed meaning in a community of
observers, which is not as subjective as one observer's own idea.

Therefore, I agree with your analysis that "percepts", "actions" and
"states" cannot be objectively measurable or identifiable concepts
--- actually no concept can be.

However, it doesn't mean that anything goes and we cannot make any
meaningful analysis of any situation.

Especially, it doesn't mean "intelligence" as a concept cannot have a
stable working definition, as the goal of a research project, since
our current understanding of the topic is clearly limited.

In my paper, I do assume we can meaningfully use words like
"percepts", "actions", and "states" according to our intuitive
understanding of them --- we have to start from somewhere. To me,
these concepts, though they have no objective meaning, are still much
less complicated than the concept of "intelligence", and different
understandings of them won't have too big an impact on the conclusions
of the paper.

By the way, the word "percepts", borrowed from Russell & Norvig,
doesn't mean "perception", which is much more subjective.

I use a conceptual framework consisting of "percepts", "actions", and
"states", not because I think these concepts are objective, but
because they can help us show the differences among various
understandings of intelligence. Even for the concept of
"intelligence", I'm not trying to find its true meaning, but to show
where different understandings will lead the research.

Of course, all of these opinions are based on my biased experience,
and restricted by my insufficient resources for processing it, and
are therefore neither objective nor fully formalized. However, I don't
believe those properties are the most important ones for defining
intelligence at the current time.

Pei



Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Mike Tintner
I heavily agree with you, Richard. But perhaps the Hutter exercise has some 
value - simply by way of making us question the validity of any mathematical 
approach to intelligence.


Well, there IS some value (although, BTW, at a glance they don't seem to 
recognize that IQ is not even a direct measure of intelligence). 
Intelligence of any kind does involve computational powers. One person may 
be a faster or more complex thinker than another. You can measure that.

But what precise mathematical figures can we put on the relative 
intelligence of:


*the Gettysburg address
*the "I have a dream" speech
*Obama's "we can" speech

*the iPod
*the Creative Zen
*the Archos

*Novamente
*LIDA
* (does Richard's have a name? ANGLOFILE? )

*Ben's lovemaking
*Franklin's lovemaking
*Richard's lovemaking

*Ben's posts
*Richard's posts
*Pei's posts

*Macbeth
*Scarface
*The Roaring Twenties

etc etc ad infinitum.

Like all the psychologists they rely on, they have an EXTREMELY limited 
idea of intelligence, and of how it is applied (and that's being generous).





Re: Yawn. More definitions of intelligence? [WAS Re: [agi] Ben's Definition of Intelligence]

2008-01-14 Thread Mike Dougherty
On Jan 14, 2008 10:10 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Any fool can mathematize a definition of a commonsense idea without
 actually saying anything new.

Ouch.  Careful.  :)  That may be true, but it takes $10M worth of
computer hardware to disprove.

disclaimer:  that was humor



Re: [agi] Comments on Pei Wang's What Do You Mean by “AI”?

2008-01-14 Thread Pei Wang
Will,

The situation you mentioned is possible, but I'd assume, given the
similar functions from percepts to states, there must also be similar
functions from states to actions, that is,
   AC = GC(SC), AH = GH(SH), GC ≈ GH

Consequently, it becomes a special case of my Principle-AI, with a
compound function:
   AC = GC(FC(PC)), AH = GH(FH(PH)), GC(FC()) ≈ GH(FH())
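Read as ordinary function composition, the compound function above can be sketched like this. The concrete FC and GC below are invented placeholders for illustration, not anything taken from the paper:

```python
from typing import Callable

# Type aliases: a percept and a state are modelled as floats,
# an action as a string.  This is purely illustrative.
Percept, State, Action = float, float, str

def compose(g: Callable[[State], Action],
            f: Callable[[Percept], State]) -> Callable[[Percept], Action]:
    """Build the compound percept-to-action map A = G(F(P))."""
    return lambda p: g(f(p))

# Invented placeholder functions for a computer C:
f_c: Callable[[Percept], State] = lambda p: p * 2.0          # FC: percept -> state
g_c: Callable[[State], Action] = lambda s: "act" if s > 1.0 else "wait"  # GC: state -> action

# GC(FC(.)): the compound function the Principle-AI view compares across systems.
behaviour_c = compose(g_c, f_c)
```

The composed `behaviour_c` maps percepts directly to actions, which is why a similarity stated over the state-update functions collapses into a special case of a similarity stated over percept-to-action behaviour.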

Pei



Re: [agi] Comments on Pei Wang's What Do You Mean by “AI”?

2008-01-14 Thread William Pearson
Something I noticed while trying to fit my definition of AI into the
categories given.

There is another way that definitions can be principled.

This similarity would not be on the function from percepts to actions.
Instead it would require a similarity in the function from percepts to
internal state as well. That is, they should be able to adapt in a
similar fashion.

SC = FC(PC), SH = FH(PH), FC ≈ FH

I'm not strictly speaking working on intelligence at the moment,
rather how to build adaptive programmable computer architectures
(which I think is a necessary first step to intelligence), so it might
take me a while to get around to fully working out my definition of
intelligence. It would contain principles like the one I mention above
though.

  Will Pearson


Re: [agi] Comments on Pei Wang's What Do You Mean by “AI”?

2008-01-14 Thread William Pearson
On 14/01/2008, Pei Wang [EMAIL PROTECTED] wrote:
 Will,

 The situation you mentioned is possible, but I'd assume, given the
 similar functions from percepts to states, there must also be similar
 functions from states to actions, that is,
AC = GC(SC), AH = GH(SH), GC ≈ GH

Pei,

Sorry, I should have thought more. I would define the similarity of the
functions one could be interested in as:

S_t = F(S_(t-1), P)

That is, the current state matters to what change is made to the
state. For example, a man coming across the percept "Oui, bien sûr"
would change his state in a different way depending on whether he
was already fluent in French or not.

This doesn't really change the rest of your argument, but I feel it is
important.
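A minimal sketch of the state-dependent update S_t = F(S_(t-1), P), using the fluent-in-French example. The state representation and field names here are invented for illustration:

```python
# State-dependent update S_t = F(S_{t-1}, P): the same percept changes
# different prior states in different ways.  The dict fields are invented.
def update(state: dict, percept: str) -> dict:
    """Return the new state produced from the old state and one percept."""
    new = dict(state)  # copy: the update is a function, not a mutation
    if percept == "Oui, bien sûr":
        if state.get("fluent_in_french"):
            new["understood"] = True            # fluent: the meaning registers
        else:
            new["heard_unknown_phrase"] = True  # not fluent: just a sound
    return new

fluent = update({"fluent_in_french": True}, "Oui, bien sûr")
novice = update({"fluent_in_french": False}, "Oui, bien sûr")
```

The same percept produces different successor states purely because the prior states differ, which is the point of writing F over (S_(t-1), P) rather than over P alone.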

 Consequently, it becomes a special case of my Principle-AI, with a
 compound function:
AC = GC(FC(PC)), AH = GH(FH(PH)), GC(FC()) ≈ GH(FH())

 Pei

To be pedantic (feel free to ignore the following if you like):

That would depend on what exactly the ≈ relation is. If you assume
it has the same meaning when used above, there are possible meanings
for it where the relation (FC ≈ FH and GC ≈ GH) does not imply
(GC(FC()) ≈ GH(FH())).

Consider this meaning of ≈: x and y are similar because they can be
transformed to reference programs, in a reference language, of the
same length ± 20 bytes. This would mean the representation for
GC(FC()) would only be within ± 40 bytes of GH(FH()), which
wouldn't be the same relation.

A bit contrived I know, but as we are working on the theoretical side
of things, this is the best example I could think of at short notice.
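The point that a tolerance-based ≈ need not survive composition can be checked with a toy model. It crudely assumes the composed program's length is the sum of the component lengths, and all the byte counts are invented:

```python
# "x ≈ y" here means: reference-program lengths differ by at most TOL bytes.
# Composing two ≈-related pairs only guarantees a 2*TOL bound, so the
# composed pair need not satisfy the original relation.
TOL = 20

def similar(len_x: int, len_y: int, tol: int = TOL) -> bool:
    """The tolerance-based similarity relation on program lengths."""
    return abs(len_x - len_y) <= tol

# Invented lengths: each component pair is within TOL of its counterpart.
len_fc, len_fh = 100, 118   # FC ≈ FH (differ by 18)
len_gc, len_gh = 200, 217   # GC ≈ GH (differ by 17)

# Crude model: composed length = sum of component lengths.
len_gcfc = len_gc + len_fc  # length of GC(FC())
len_ghfh = len_gh + len_fh  # length of GH(FH())

composed_similar = similar(len_gcfc, len_ghfh)                 # fails TOL
composed_within_double = similar(len_gcfc, len_ghfh, 2 * TOL)  # meets 2*TOL
```

The differences 18 and 17 each pass the 20-byte test, but their sum, 35, does not, so the composed pair only satisfies the looser ± 40 relation.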

Until I get a better feeling of my own definition, I can't really say
much more that is really useful.
   Will


Re: [agi] Comments on Pei Wang's What Do You Mean by “AI”?

2008-01-14 Thread Pei Wang
2008/1/14 William Pearson [EMAIL PROTECTED]:

 I would define the similarity of the
 functions one could be interested in as:

 S_t = F(S_(t-1), P)

 That is, the current state matters to what change is made to the
 state. For example, a man coming across the percept "Oui, bien sûr"
 would change his state in a different way depending on whether he
 was already fluent in French or not.

 This doesn't really change the rest of your argument, but I feel it is
 important.

That is correct for all deterministic systems, like a Turing Machine.
However, I really don't like to describe the internal situation of a
system (or the external situation of its environment) using "state".
Though it is common practice, this notion implies that the
description is complete and precise, which is often impossible. In
this paper, you can see that I only mention "state" in the first
category (Structure-AI), and leave it out for the other categories,
even though for those we could still discuss their states, as you
suggested.

  Consequently, it becomes a special case of my Principle-AI, with a
  compound function:
 AC = GC(FC(PC)), AH = GH(FH(PH)), GC(FC()) ≈ GH(FH())

 That would depend on what exactly the ≈ relation is. If you assume
 it has the same meaning when used above, there are possible meanings
 for it where the relation (FC ≈ FH and GC ≈ GH) does not imply
 (GC(FC()) ≈ GH(FH())).

Of course. What I gave is a very rough relation.

 Consider this meaning of ≈: x and y are similar because they can be
 transformed to reference programs, in a reference language, of the
 same length ± 20 bytes. This would mean the representation for
 GC(FC()) would only be within ± 40 bytes of GH(FH()), which
 wouldn't be the same relation.

No, that is not the kind of situation I'm talking about. At the
current stage, I'm not really trying to propose a quantitative
measurement of intelligence or of the similarity between systems.
Instead, I'm looking for qualitative differences among working
definitions of intelligence. I just have to assume that it is
meaningful to talk about the similarity between systems in several
aspects, and that will be enough for the conclusions of the paper.

Pei
