On Tuesday 11 July 2006 18:49, James Ratcliff wrote:
> > > So my guess is that focusing on the practical level for building an agi
> > > system is sufficient, and it's easier than focusing on very abstract
> > > levels. When you have a system that can e.g. play soccer, tie shoe
> > > lases, build fences, throw objects to hit other objects, walk through a
> > > terrain to a spot, cooperate with other systems in achieving these
> > > practical goals
>
>  * The problem is that a certain level of abstraction must be achieved to
> successfully carry out all these tasks in a useful way.

That is the big problem, I agree, but not exactly the problem I wrote about.

> If we 
> teach and train a robot to open a door, and then present it with another
> type of door that opens differently, it will not be able to handle it,
> unless it can reason at a higher level, using abstract knowledge of doors,
> movement and handles.  This is very important for building a general
> intelligence.  Simple visual object detection has the same problem; it
> seems to appear throughout planning, acting and reasoning processes.

Agreed.

----------------------

>
> One thing I have been working on in this regard is the use of a 'script
> system'.  It seems very impractical to have the AGI try to recreate these
> plans every single time, and we can use the scripts to abstract and reason
> about tasks and to create new scripts.  We as humans live most of our lives
> doing very repetitive tasks: I drive to work every day, eat, work and drive
> home.  I do these things automatically, and most of the time don't put a lot
> of thought into them; I just follow the script.  In the case of planning a
> trip like that, we may not know the exact details, but we know the overview
> of what to do, so we could take a script of travel planning, copy it, and
> use it as a base template for acting.
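
To make the proposal concrete, I read a 'script' as roughly the following
(a minimal Python sketch; the class and field names are my own invention,
not a description of your implementation):

from copy import deepcopy
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str                        # e.g. "walk_to", "grasp", "turn"
    params: dict = field(default_factory=dict)

@dataclass
class Script:
    goal: str
    steps: list[Step] = field(default_factory=list)

    def specialise(self, **bindings) -> "Script":
        """Copy the script and fill in situation-specific parameters."""
        new = deepcopy(self)
        for step in new.steps:
            step.params.update({k: v for k, v in bindings.items()
                                if k in step.params})
        return new

# The daily commute as a template, later copied and adapted for a new trip.
commute = Script("arrive at work", [
    Step("walk_to", {"target": "car"}),
    Step("drive",   {"destination": "work"}),
])
trip = commute.specialise(destination="airport")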

This doesn't sound bad, but you ignore the problem of representation. In what 
representational system do you express those scripts? How do you make sure 
that a system can effectively and efficiently express effective and efficient 
plans, procedures and actions in it (avoiding the brittle, self-contained 
representational systems of expert systems)? And how can a system automatically generate such 
a representational system (recursively, so that it can stepwise abstract away 
from the sensory level)? And how does it know which representational system 
is relevant in a situation?

Concept formation: how does it happen?

> This does not remove the combinatorial-explosion search/planning problem of
> having an infinite number of choices for each action, but it does give us a
> fall-back plan if we are pressed for time or cannot currently find another
> solution.
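
The fall-back idea might then look roughly like this, assuming a stored
script (as sketched above) is indexed by its goal; the planner stub and the
names here are hypothetical, purely to illustrate the control flow:

import time

def plan_from_scratch(goal, deadline):
    """Stand-in for a search-based planner that must respect the deadline."""
    # a real planner would expand a search tree until time runs out;
    # here we simply pretend no plan was found in time.
    return None

def act(goal, script_library, budget_s=1.0):
    deadline = time.monotonic() + budget_s
    plan = plan_from_scratch(goal, deadline)
    if plan is not None:
        return plan
    # Pressed for time, or no plan found: fall back to a stored script.
    matches = [s for s in script_library if s.goal == goal]
    return matches[0] if matches else None

# e.g. act("arrive at work", [commute]) falls back to the commute script.
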
>
>   I am working in a small virtual world right now, and implementing a
> simple set of tasks in a house environment. Another thought I am working on
> is some kind of semi-supervised learning for the agents, and an interactive
> method for defining actions and scripts.  
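
I take the 'interactive method' to mean something like recording a teacher's
demonstrated action sequence and storing it as a new script. That is my
guess, not your design; the helper below reuses the hypothetical Step and
Script classes from the first sketch:

def record_demonstration(goal, demonstrated_actions):
    """Turn a teacher-supplied action sequence into a reusable script."""
    steps = [Step(action=name, params=dict(params))
             for name, params in demonstrated_actions]
    return Script(goal=goal, steps=steps)

# e.g. a teacher demonstrates opening the front door once:
open_door = record_demonstration("open front door", [
    ("walk_to", {"target": "front door"}),
    ("grasp",   {"target": "handle"}),
    ("turn",    {"target": "handle"}),
    ("pull",    {"target": "front door"}),
])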

Interactive Method? Why should this be called AI?

> It doesn't appear fruitful to create an agent, define a huge set of actions,
> give it a goal, and expect it to successfully achieve the goal; the search
> space just gets too large, and the agent becomes concerned with an infinite
> variety of useless repetitive choices.

So, in other words, looking for an agi system is not very fruitful?

>
> After gathering a number of scripts, an agent can then choose among the
> scripts, or revert to the lower-level set of actions it can perform.

It doesn't seem to be very interesting, in the context of the agi mailing 
list.

Arnoud
