Well, in my 1993 book "The Structure of Intelligence" I defined intelligence as

"The ability to achieve complex goals in complex environments."

I followed this up with a mathematical definition of complexity grounded in
algorithmic information theory (roughly: the complexity of X is the amount of
pattern immanent in X or emergent between X and other Y's in its environment).
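
(In standard algorithmic-information terms, the kind of thing I mean is: the
complexity of X is its Kolmogorov complexity

K(X) = \min \{ |p| : U(p) = X \}

i.e. the length of the shortest program p that makes a universal computer U
output X; a "pattern in X" is then any such p that is significantly shorter
than X itself. That's the flavor, though the formulation in the book differs
in detail.)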

This was closely related to what Hutter and Legg did last year, in a more
rigorous paper that gave an algorithmic-information-theory-based definition of
intelligence.
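
(Their universal intelligence measure is, roughly: score an agent \pi by its
expected performance across all computable environments \mu, weighted by
simplicity,

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where K(\mu) is the Kolmogorov complexity of the environment and V_\mu^\pi is
the agent's expected total reward in it -- so simple environments dominate the
sum, but every computable one contributes.)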

Having put some time into this sort of definitional work, I then moved on to
more interesting things like figuring out how to actually make an intelligent
software system given feasible computational resources.

The catch with the above definition is that a truly general intelligence is
possible only with infinite computational resources (Hutter's AIXI makes this
vivid: it is maximally intelligent in roughly this sense, but incomputable).
So, different AGIs may be able to achieve different sorts of complex goals in
different sorts of complex environments. And if an AGI is sufficiently
different from us humans, we may not even be able to comprehend the complexity
of the goals or environments that are most relevant to it.

So, there is a general theory of what AGI is; it's just not very useful.

To make it pragmatic, one has to specify some particular classes of goals and
environments. For example:

goal = getting good grades
environment = online universities

Then, to connect this kind of pragmatic definition with the mathematical
definition, one would have to prove the complexity of the goal (getting good
grades) and the environment (online universities) based on some relevant
computational model. But the latter seems very tedious and boring work...
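
(If one wanted to gesture at the formal setup computationally, here is a toy
Python sketch -- every name in it is invented purely for illustration, not
drawn from any real system. Goal-achievement is just expected reward of an
agent coupled to an environment:

import random

class OnlineCourse:
    """Toy stand-in for the 'online university' environment:
    observations are arithmetic questions, rewards are grade points."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def question(self):
        # an observation presented to the agent
        return (self.rng.randint(0, 9), self.rng.randint(0, 9))

    def grade(self, q, answer):
        # reward signal: 1 point for a correct sum
        return 1.0 if answer == q[0] + q[1] else 0.0

def achievement(agent, env, n=100):
    """Degree of goal-achievement: average grade over n questions."""
    total = 0.0
    for _ in range(n):
        q = env.question()
        total += env.grade(q, agent(q))
    return total / n

# An "intelligent" agent for this (goal, environment) pair scores 1.0:
print(achievement(lambda q: q[0] + q[1], OnlineCourse()))

The tedious part, per the above, would be proving the complexity of realistic
goal and environment classes in such a model, not coding toy instances of it.)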

And IMO, all this does not move us very far toward AGI, though it may help
avoid some conceptual pitfalls that one might otherwise fall into...

-- Ben G
On 4/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

 Hi,

I strongly disagree - there is a need to provide a definition of AGI - not
necessarily the right or optimal definition, but one that poses concrete
challenges and focusses the mind - even if it's only a starting-point. The
reason the Turing Test has been such a successful/popular idea is that it
focusses the mind.

(BTW I immediately noticed your lack of a good definition on going through
your site and papers, and it raised doubts in my mind. In general, the more or
less focussed your definition/mission statement, I would argue, the more or
less seriously people will tend to take you.)

Ironically, I was just trying to take Marvin Minsky to task for this on
another forum. I suddenly realised that although he has been talking about
the problem of AGI for decades, he has only waved at it, and not really
engaged with it. He talks about how having different ways of thinking about a
problem, as the human mind does, is important for AGI - and that's certainly
one central problem/goal - but he doesn't really focus it.

Here's my first crack at a definition - very crude - offered strictly in
brainstorming mode - but I think it does focus a couple of AGI challenges at
least - and fits with some of the stuff you say.

AN AGI MACHINE - a truly adaptive, truly learning machine - is one that will
be able to:

1) Conduct a set of goal-seeking activities

- where it starts with only a rough, incomplete idea of how to reach its
goals,

- i.e. knows only some of the steps it must take, & some of the rules that
govern those steps

- and can find its way to its goals "making it up as it goes along"

- by finding new ways round more or less unfamiliar obstacles.

To do this it must be able to:

2) Change its steps and rules -

- not just revising them according to predetermined formulae, but

- adding new steps and rules, & even

- creating new rules that break existing ones.

3) Learn new related activities.


[The key thing in this definition, for me, is that it focusses on the need
for AGI to be able to radically change the steps and rules of any activity it
undertakes - see the toy sketch just below.]
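
(To make that slightly more concrete, here's a toy Python sketch - every name
in it is my own invention, purely illustrative, not a real AGI design - of an
agent whose rules are data it can add to and override at runtime, rather than
a fixed program:

class Agent:
    """Toy agent whose rule set can grow and change while it acts."""
    def __init__(self, rules):
        self.rules = list(rules)          # rough, incomplete starting rules

    def act(self, situation):
        for rule in self.rules:           # try known rules first
            action = rule(situation)
            if action is not None:
                return action
        return "improvise"                # no rule applies: make it up as it goes

    def add_rule(self, rule):
        self.rules.insert(0, rule)        # new rules can pre-empt (break) old ones

# Usage: one crude soccer rule at first; another is learned mid-game.
attack = lambda s: "dribble toward goal" if s == "have ball" else None
agent = Agent([attack])
print(agent.act("have ball"))             # -> dribble toward goal
print(agent.act("opponent pressing"))     # -> improvise
agent.add_rule(lambda s: "pass back" if s == "opponent pressing" else None)
print(agent.act("opponent pressing"))     # -> pass back

Obviously the hard part is for the machine to invent the new rules itself
rather than have them handed over - but the point is that the rule set is
open-ended, not fixed in advance.)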

EXAMPLE [again a very crude one - the first that came to mind]:

An AGI machine would be a SPORTING ROBOT that first could learn to play
soccer, as we do, by being taught a few basic principles [like "try to score
a goal by running towards the goal with the ball, or passing it to other team
members, ..."] and shown a few soccer games.

It would then be able to learn the game as it goes along, by playing. And it
should be able to find and learn new routes to goal, new passes, new kicks
(with perhaps new spins and backswings). It should even be able to adapt its
rules - adding new ones like "you can move back towards your own goal when
you have the ball, as well as forwards towards the opponent's".

And having learned soccer, it should be able to learn OTHER FIELD/COURT
SPORTS in similar fashion - like Gaelic football, hockey, basketball, etc.

[Comment: Perhaps much too extravagant a starting-goal - maybe better to have
a maze-running robot that can learn to run radically different and surprising
kinds of mazes - but once objections are considered, more realistic goals can
be set.]


----- Original Message -----

*From:* Benjamin Goertzel <[EMAIL PROTECTED]>
*To:* singularity@v2.listbox.com
*Sent:* Tuesday, April 24, 2007 9:50 PM
*Subject:* Re: [singularity] Why do you think your AGI design will work?


Hi,

We don't have any solid **proof** that Novamente will "work" in the sense
of leading to powerful AGI.

We do have a set of mathematical conjectures that look highly plausible and
that, if true, would imply that Novamente will work (if properly implemented
and a bunch of details are gotten right, etc.). But we have not proved these
conjectures and are not currently focusing on proving them, as that is a big
hard job in itself.... We have decided to seek proof via practical
construction and experimentation rather than proof via formal mathematics.

Wright Bros. did not prove their airplane would work before building it.
But they were confident based on their intuitive theoretical model of
aerodynamics, which turned out to be correct.  The case with Novamente is a
bit more rigorous than this because we have gotten to the point of stating
but not proving mathematical conjectures that would imply the workability of
the system...

As for Matt Mahoney's point about "defining AGI" being the bottleneck, I
really think that is a red herring. Rigorously defining any natural language
term is a pain. You can play for hours with the definition of "cup" versus
"bowl", or the definition of "flight" versus "leaping" versus "floating in
space", etc. Big deal!

-- Ben G



On 4/24/07, Joshua Fox <[EMAIL PROTECTED]> wrote:

>    Ben has confidently stated that he believes Novamente will work
> (http://www.kurzweilai.net/meme/frame.html?m=3 and others).
>
> AGI builders, what evidence do you have that your design will work?
>
> This is an oft-repeated question, but I'd like to focus on two possible
> bases for saying that an invention will work before it does.
> 1. A clear, simple, mathematical theory, verified by experiment. The
> experiments can be "pure science" rather than technology tests.
> 2. Functional tests of component parts or of crude prototypes.
>
> Maybe I am missing something in the articles I have read, but do
> contemporary AGI builders have a verified theory and/or verified components
> and prototypes?
>
> Joshua


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=8eb45b07
