Re: [agi] Re: Representing Thoughts

2005-09-23 Thread Yan King Yin
However science is also a form of competition between agents (humans being a type of agent), the winner being the most cited. Let us say that your type of intelligence becomes prevalent; it would become very easy to predict what this type of intelligence would find interesting (just feed it all t

[agi] Re: Representing Thoughts

2005-09-23 Thread William Pearson
On 9/20/05, Yan King Yin <[EMAIL PROTECTED]> wrote: > William wrote: > > I suspect that it will be quite important in competition between agents. If one agent has a constant method of learning, it will be more easily predicted by an agent that can figure out its constant method (if it i

Re: [agi] Re: Representing Thoughts

2005-09-20 Thread Yan King Yin
William wrote: I suspect that it will be quite important in competition between agents. If one agent has a constant method of learning, it will be more easily predicted by an agent that can figure out its constant method (if it is simple). If it changes (and changes how it changes), then it will
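
A minimal sketch of the predictability point above, in Python (the matching-pennies game, the agent rules, and the hypothesis class are all invented for illustration; nothing here is from the thread): agent A learns by one constant, simple rule, and agent B recovers that rule from A's observed play by elimination over a small hypothesis class, after which A is an open book.

    # Toy illustration: an agent with a constant, simple learning rule is
    # exactly predictable by an opponent that identifies the rule from play.
    # All names and the game itself are hypothetical, chosen for brevity.
    import random

    def win_stay_lose_shift(last_move, won):
        # Agent A's fixed rule: repeat a winning move, flip a losing one.
        return last_move if won else 1 - last_move

    def always_repeat(last_move, won):
        return last_move

    def always_flip(last_move, won):
        return 1 - last_move

    HYPOTHESES = [win_stay_lose_shift, always_repeat, always_flip]

    def consistent_rules(history):
        # Agent B keeps every hypothesis that matches A's observed behaviour.
        return [h for h in HYPOTHESES
                if all(h(prev, won) == nxt for prev, won, nxt in history)]

    # Simulate: A plays its constant rule while B merely watches.
    history, a_move = [], 0
    for _ in range(10):
        b_move = random.randint(0, 1)      # B probes at random
        won = (a_move == b_move)           # say A wins on a match
        nxt = win_stay_lose_shift(a_move, won)
        history.append((a_move, won, nxt))
        a_move = nxt

    survivors = consistent_rules(history)
    print([h.__name__ for h in survivors])    # usually just A's rule
    # Knowing the rule, B can forecast A's response to either outcome:
    rule = survivors[0]
    print("if A wins:", rule(a_move, True), "/ if A loses:", rule(a_move, False))

If A could also change how it changes, B's fixed hypothesis class would no longer suffice, which is the asymmetry this sub-thread is pointing at.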

[agi] Re: Representing Thoughts

2005-09-12 Thread William Pearson
On 9/12/05, Yan King Yin <[EMAIL PROTECTED]> wrote: > Will Pearson wrote: > > Define what you mean by an AGI. Learning to learn is vital if you wish to try and ameliorate the No Free Lunch theorems of learning. > I suspect that No Free Lunch is not very relevant in practice. Any learning

Re: [agi] Re: Representing Thoughts

2005-09-12 Thread Yan King Yin
Will Pearson wrote: Define what you mean by an AGI. Learning to learn is vital if you wish to try and ameliorate the No Free Lunch theorems of learning. I suspect that No Free Lunch is not very relevant in practice. Any learning algorithm has its implicit way of generalization and it may tu
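
Since No Free Lunch is doing real work in this exchange, here is a self-contained demonstration of the theorem's content on the smallest interesting case (the 3-bit domain and the two example learners are arbitrary choices for the demo, not anything proposed in the thread): averaged over every possible Boolean target, any fixed learner scores exactly 50% on a held-out input, so learners can only differ through an implicit bias toward some problems at the expense of others, which is the "implicit way of generalization" Yan mentions.

    # No Free Lunch, concretely: averaged over ALL Boolean targets on
    # 3-bit inputs, every learner has the same off-training-set accuracy.
    from itertools import product

    inputs = list(product([0, 1], repeat=3))
    train, test = inputs[:-1], inputs[-1]        # hold out one point

    def majority_learner(examples, x):
        # Predict the most common training label, ignoring x entirely.
        ones = sum(label for _, label in examples)
        return 1 if 2 * ones >= len(examples) else 0

    def parity_learner(examples, x):
        # Ignore the data; always guess the parity of the input bits.
        return sum(x) % 2

    for learner in (majority_learner, parity_learner):
        correct = 0
        # Enumerate every possible target function f: {0,1}^3 -> {0,1}.
        for labels in product([0, 1], repeat=len(inputs)):
            f = dict(zip(inputs, labels))
            examples = [(x, f[x]) for x in train]
            correct += learner(examples, test) == f[test]
        print(learner.__name__, correct / 2 ** len(inputs))  # both 0.5

Both learners tie at exactly 0.5 over the uniform ensemble of targets; the practical question, as the snippet above suggests, is only whether a learner's bias matches the problems that actually occur.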

[agi] Re: Representing Thoughts

2005-09-09 Thread William Pearson
On 9/9/05, Yan King Yin <[EMAIL PROTECTED]> wrote: > "learning to learn" which I interpret as applying the current knowledge rules to the knowledge base itself. Your idea is to build an AGI that can modify its own ways of learning. This is a very fanciful idea but is not the most direct
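
A tiny sketch of this reading of "learning to learn" (the string-rewrite representation is invented purely for illustration): because rules are stored as ordinary data, the same rewrite step that updates facts can be turned on the rule base itself, changing how the system learns from then on.

    # Rules as data: the rewrite step that updates facts can also be
    # applied to the rules themselves, because rules are just strings.
    def rewrite(items, rule_list):
        out = []
        for s in items:
            for old, new in rule_list:
                s = s.replace(old, new)
            out.append(s)
        return out

    facts = ["the sky is blue"]
    rules = [("blue", "green")]              # ordinary first-order rule
    meta_rules = [("green", "red")]          # operates on rules, not facts

    # Ordinary learning: apply the rules to the knowledge base.
    print(rewrite(facts, rules))             # ['the sky is green']

    # Learning to learn: apply the meta-rules to the rule base itself.
    flat = ["%s -> %s" % r for r in rules]
    rules = [tuple(s.split(" -> ")) for s in rewrite(flat, meta_rules)]

    print(rewrite(["roses are blue"], rules))  # now ['roses are red']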

Re: [agi] Re: Representing Thoughts

2005-09-09 Thread Eugen Leitl
On Fri, Sep 09, 2005 at 12:30:11PM +, William Pearson wrote: > Does evolution have the lowest level of inference that you talked about? Or would it be better characterised as self-modifying (e.g. crossover that can alter the mechanics of crossover)? It is a meta-method. Mutation with
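
The self-modifying reading is easiest to see with the standard self-adaptation trick from evolution strategies, sketched below (the snippet mentions crossover altering crossover; per-individual mutation rates are substituted here only because they fit in a few lines): each genome carries the parameter that controls its own variation, and that parameter is itself varied and selected, so the search operator evolves alongside the solutions.

    # Self-adaptive mutation: each individual carries its own mutation
    # rate, and that rate is mutated too, so selection shapes the operator.
    # (A stand-in for self-modifying crossover; details are illustrative.)
    import math
    import random

    def fitness(bits):
        return sum(bits)                     # toy objective: maximise ones

    def mutate(ind):
        bits, rate = ind
        # First perturb the strategy parameter (log-normal step) ...
        rate = min(0.5, max(0.001, rate * math.exp(random.gauss(0, 0.2))))
        # ... then mutate the solution bits using the NEW, inherited rate.
        bits = [b ^ (random.random() < rate) for b in bits]
        return (bits, rate)

    pop = [([random.randint(0, 1) for _ in range(20)], 0.1)
           for _ in range(30)]
    for _ in range(50):
        pop = sorted(pop, key=lambda i: fitness(i[0]), reverse=True)[:10]
        pop += [mutate(random.choice(pop)) for _ in range(20)]

    best = max(pop, key=lambda i: fitness(i[0]))
    print(fitness(best[0]), round(best[1], 3))   # solution and evolved rate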

[agi] Re: Representing Thoughts

2005-09-09 Thread William Pearson
On 9/9/05, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > Leitl wrote: > > > > In the language of Gregory Bateson (see his book "Mind and Nature"), you're suggesting to do away with "learning how to learn" --- which is not at all a workable idea for AGI. > > > > Learning to evolve by

[agi] Re: Representing Thoughts

2005-09-04 Thread SCN User
Yan King Yin wrote: > One of the central issues in AGI would be how thoughts are represented. http://mind.sourceforge.net/mind4th.html -- Mind.Forth -- represents thoughts as associative chains within a "Psi" conceptual array. Mind.Forth has made immense progress thus far in 2005. To run Min
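
For readers who have not seen the program, here is a rough sketch of the general idea of an associative chain over a flat conceptual array (the field names and layout below are invented for illustration and are NOT Mind.Forth's actual Psi format; consult the URL above for the real implementation): each cell records a concept plus a backward link to the concept that evoked it, so a thought is replayed by following the links.

    # Sketch of an associative chain in a flat conceptual array: each
    # cell stores a concept and the index of the cell that activated it.
    # (Illustrative only; not the actual Psi array layout of Mind.Forth.)
    psi = []                                 # the conceptual array

    def activate(concept, prev=-1):
        # Append a concept node linked back to the node that evoked it.
        psi.append({"concept": concept, "assoc": prev})
        return len(psi) - 1

    # Lay down the thought "robots need me" as a chain of activations.
    n = activate("robots")
    n = activate("need", prev=n)
    n = activate("me", prev=n)

    def recall(idx):
        # Follow associative links backwards to reconstruct the thought.
        words = []
        while idx != -1:
            words.append(psi[idx]["concept"])
            idx = psi[idx]["assoc"]
        return " ".join(reversed(words))

    print(recall(n))                         # -> "robots need me"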