Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread James Ratcliff
Agreed, but I think as a first-level project I can accept the limitation of modelling the AI 'as' a human, since we are a long way off from turning it loose as its own robot, and this will allow it to act and reason more as we do. Currently I have PersonAI as a subset of Person, where it will
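
[A minimal sketch of the kind of Person/PersonAI relationship described above. The class names follow the post, but every attribute and method here is an illustrative assumption, not the poster's actual design.]

    # Illustrative sketch only: Person/PersonAI follow the post's naming,
    # but all attributes and methods are assumptions.
    class Person:
        """Generic human model the system reasons about."""
        def __init__(self, name):
            self.name = name
            self.goals = []          # e.g. ["stay charged", "answer questions"]

        def act(self, situation):
            """Pick a response the way a human might; stubbed here."""
            return f"{self.name} responds to {situation}"


    class PersonAI(Person):
        """The AI modelled 'as' a human: it reuses Person's interface."""
        def act(self, situation):
            # Same interface as Person, but the choice is made by explicit
            # goal lookup rather than by human judgement.
            for goal in self.goals:
                if goal in situation:
                    return f"pursue goal: {goal}"
            return super().act(situation)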

Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Bob Mottram
Goals don't necessarily need to be complex or even explicitly defined. One goal might just be to minimise the difference between experiences (whether real or simulated) and expectations. In this way the system learns what a normal state of being is, and detects deviations. On 21/11/06,
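
[A minimal sketch of this kind of expectation-matching goal. The running-average expectation model, the class name, and the threshold are assumptions made for illustration, not anything from the post.]

    import random

    # Illustrative sketch: the agent's only 'goal' is to keep the gap between
    # what it expects and what it experiences small.
    class ExpectationAgent:
        def __init__(self, learning_rate=0.1, alarm_threshold=2.0):
            self.expected = 0.0
            self.learning_rate = learning_rate
            self.alarm_threshold = alarm_threshold

        def observe(self, experience):
            surprise = experience - self.expected
            # Learn what 'normal' looks like by nudging the expectation.
            self.expected += self.learning_rate * surprise
            # A large surprise is a deviation from the normal state of being.
            return abs(surprise) > self.alarm_threshold

    agent = ExpectationAgent()
    for t in range(100):
        agent.observe(random.gauss(5.0, 0.5))   # normal operation
    print(agent.observe(20.0))                  # True: abnormal reading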

Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Charles D Hixson
I don't know that I'd consider that an example of an uncomplicated goal. That seems to me much more complicated than simple responses to sensory inputs. Valuable, yes, and even vital for any significant intelligence, but definitely not at the minimal level of complexity. An example of a

Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Bob Mottram
Things like finding recharging sockets are really more complex goals built on top of more primitive systems. For example, if a robot heading for a recharging socket loses a wheel, its goals should change from feeding to calling for help. If it cannot recognise a deviation from the normal state
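
[A hedged sketch of the goal switch described here, layered on a deviation check like the one above. The sensor names, goal strings, and thresholds are assumptions for the example only.]

    # Illustrative sketch: a higher-level goal ("find recharging socket") is
    # overridden when a deviation from the robot's normal state is detected.
    def choose_goal(battery_level, wheel_count, expected_wheels=4):
        deviation = wheel_count < expected_wheels     # not the normal state
        if deviation:
            return "call for help"                    # primitive safety response
        if battery_level < 0.2:
            return "find recharging socket"           # feeding goal
        return "continue current task"

    print(choose_goal(battery_level=0.1, wheel_count=4))  # find recharging socket
    print(choose_goal(battery_level=0.1, wheel_count=3))  # call for help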

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Ben Goertzel
Well, in the language I normally use to discuss AI planning, this would mean that 1) keeping charged is a supergoal 2) The system knows (via hard-coding or learning) that finding the recharging socket == keeping charged (i.e. that the former may be considered a subgoal of the latter) 3) The
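
[A minimal sketch of the supergoal/subgoal relationship described in points 1) and 2). The Goal class and its methods are assumptions for illustration, not the planner being discussed.]

    # Illustrative sketch: "keep charged" is a supergoal and "find recharging
    # socket" a hard-coded or learned subgoal of it.
    class Goal:
        def __init__(self, name, subgoals=None):
            self.name = name
            self.subgoals = subgoals or []

        def plan(self):
            """Expand the supergoal into the next concrete subgoal to pursue."""
            if not self.subgoals:
                return self.name
            return self.subgoals[0].plan()

    keep_charged = Goal("keep charged",
                        subgoals=[Goal("find recharging socket")])
    print(keep_charged.plan())   # -> "find recharging socket"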

Re: [agi] Information extraction from inputs and an experimental path forward

2006-11-22 Thread William Pearson
On 21/11/06, Pei Wang [EMAIL PROTECTED] wrote: That sounds better to me. In general, I'm against attempts to get complete, consistent, certain, and absolute descriptions (of either internal or external state), and prefer partial, not-necessarily-consistent, uncertain, and relative ones --- not

Re: Re: [agi] A question on the symbol-system hypothesis

2006-11-22 Thread Mike Dougherty
On 11/22/06, Ben Goertzel [EMAIL PROTECTED] wrote: Well, in the language I normally use to discuss AI planning, this would mean that 1) keeping charged is a supergoal 2) The system knows (via hard-coding or learning) that finding the recharging socket == keeping charged If charged becomes