Whoa!!
~PM

Date: Sat, 21 Dec 2013 13:43:19 -0600
Subject: Re: [agi] A Random Thought...
From: [email protected]
To: [email protected]

Ok, which dimension are you attempting to scale up? Triviality corresponds to 
minimal representational capacity, both in the environment and in the agent 
operating within it. Human beings are (currently) at the other end of that 
scale, with enormous representational capacity for dealing with a highly 
complex environment. The more complex (non-trivial) the environment, the 
greater the representational capacity agents operating within it need in order 
to make decisions effectively. It is this dimension that I am looking at.

Learning algorithms are easy to understand, design, and implement. They are 
just solutions to optimization problems. I do not think learning itself is 
where the bottleneck lies. Instead, I look at the representational systems 
underlying those learning algorithms. The simplest learning algorithms operate 
over tables of choices. They tabulate expected returns or error levels for each 
choice, over many repetitions, and gradually settle on the choice(s) with the 
maximum expected return or minimum expected error. Adding layers of 
sophistication, we begin to see context matter more and more: conditional 
choices and statefulness yield much more interesting and coherent behavior. 
Generalizing over choices, conditions, and actions to similar ones, we see a 
further gain in coherence, with algorithms that can deal with new situations 
robustly based on previous experience with other situations.
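To make the "tables of choices" idea concrete, here is a minimal sketch of such a tabular learner. The class name, the epsilon-greedy exploration, and the toy reward values are all my own illustrative assumptions, not anything from the discussion above -- just one simple way to tabulate expected returns per choice over repetitions and settle on the maximum.

```python
import random
from collections import defaultdict

class TabularLearner:
    """Tabulates expected returns per choice over many repetitions
    and gradually settles on the choice with the maximum return."""

    def __init__(self, choices, epsilon=0.1):
        self.choices = list(choices)
        self.epsilon = epsilon            # exploration rate (assumed, not from the text)
        self.totals = defaultdict(float)  # summed returns per choice
        self.counts = defaultdict(int)    # repetitions per choice

    def expected_return(self, choice):
        n = self.counts[choice]
        return self.totals[choice] / n if n else 0.0

    def select(self):
        # Mostly exploit the choice with the maximum expected return,
        # occasionally explore so every choice keeps getting sampled.
        if random.random() < self.epsilon:
            return random.choice(self.choices)
        return max(self.choices, key=self.expected_return)

    def update(self, choice, reward):
        self.totals[choice] += reward
        self.counts[choice] += 1

# Toy environment: choice "b" pays more than choice "a".
random.seed(0)
learner = TabularLearner(["a", "b"])
rewards = {"a": 0.2, "b": 0.8}
for _ in range(1000):
    c = learner.select()
    learner.update(c, rewards[c])
```

After enough repetitions the table alone is enough to pick "b" -- which is exactly the point of the paragraph above: the learning rule is trivial, and everything interesting happens in what the table can (and cannot) represent.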

What is needed is to increase the expressivity of the underlying 
representational schemes used by learning algorithms. Moving up to the 
representational complexity level of ontologies, episodic memory, etc., the 
representational scheme becomes ever more capable. In order to reason about 
things, we need to represent those things effectively. Once we have a fully 
capable representational scheme -- a programmatic framework for the 
representation of Meaning, in all its forms, with all its inherent ambiguities 
-- we can begin writing learning algorithms to extract meaning from the 
environment, generate rules for predicting arbitrary unobserved phenomena from 
arbitrary observed phenomena, recombine meanings to produce new ones, choose 
contextually appropriate and meaningful behavior, and so on. There is no 
understanding without meaning, and there is no intelligence without 
understanding.
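As one toy illustration of "predicting unobserved phenomena from observed phenomena": even a flat co-occurrence table supports a crude version of it, which is precisely why richer representational schemes are needed for anything beyond the crude version. Everything here -- the class, the weather-flavored phenomena, the conditional-frequency scoring -- is an assumed sketch of mine, not a proposal from the thread.

```python
from collections import defaultdict

class CooccurrencePredictor:
    """Learns which phenomena tend to co-occur, then predicts the
    unobserved phenomenon most strongly associated with what is observed."""

    def __init__(self):
        self.pair_counts = defaultdict(int)  # (p, q) -> times seen together
        self.counts = defaultdict(int)       # p -> times seen at all

    def observe(self, phenomena):
        phenomena = set(phenomena)
        for p in phenomena:
            self.counts[p] += 1
            for q in phenomena:
                if p != q:
                    self.pair_counts[(p, q)] += 1

    def predict(self, observed):
        # Score each unobserved phenomenon by its conditional
        # frequency given each observed one, summed over observations.
        observed = set(observed)
        scores = defaultdict(float)
        for p in observed:
            if not self.counts[p]:
                continue
            for (a, b), c in self.pair_counts.items():
                if a == p and b not in observed:
                    scores[b] += c / self.counts[p]
        return max(scores, key=scores.get) if scores else None

# Toy episodes (assumed data): clouds have co-occurred with rain.
model = CooccurrencePredictor()
model.observe(["clouds", "rain"])
model.observe(["clouds", "rain"])
model.observe(["sun", "dry"])
```

Given "clouds", the model predicts "rain" from the table alone. Note what it cannot do: it has no ontology, no episodic structure, no notion of similarity between phenomena -- the representational limits the paragraph above is pointing at.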

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com