Yes, if someone were able to figure out a way to get the DeepMind AGI to
figure out its own fitness function for an arbitrary Atari game, that would
be a major advance in the right direction IMHO.  Want to volunteer?

The fitness function for evolution seems to be survival.  I wonder if we
could use that somehow?
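As a toy illustration, "survival" can be posed as a fitness measure with no
score variable anywhere: rate a policy by how long it stays alive. Everything
below (the Environment class, its death probability) is a made-up stand-in,
not the DeepMind setup:

```python
import random

class Environment:
    """Toy world: each step the agent dies with some small probability."""
    def __init__(self, death_prob=0.05, seed=0):
        self.death_prob = death_prob
        self.rng = random.Random(seed)
        self.alive = True

    def step(self, action):
        # The action is ignored in this stub; a real game would use it.
        if self.rng.random() < self.death_prob:
            self.alive = False

def survival_fitness(policy, make_env, episodes=50, max_steps=1000):
    """Fitness = average steps survived -- no game score is ever read."""
    total = 0
    for i in range(episodes):
        env = make_env(i)
        steps = 0
        while env.alive and steps < max_steps:
            env.step(policy(steps))
            steps += 1
        total += steps
    return total / episodes

fitness = survival_fitness(lambda t: 0, lambda i: Environment(seed=i))
```

Any learner (evolutionary or otherwise) could then climb this fitness measure
without being handed a score.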

Maybe we could look at brain studies to get an understanding of how
brains go about identifying which things they ought to reward or not?

Maybe some people have brains that prevent them from learning?  Perhaps
such people could help us identify the brain areas associated with such
learning?  Perhaps the neuroanatomy of those brain regions could help us
understand the algorithms the brain uses?

Maybe we could look at how simple creatures like worms or bees learn, and
then try to implement this as our fitness algorithm?
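For the worms-and-bees idea, the simplest candidate mechanism is a Hebbian
rule ("units that fire together wire together"), often used as a cartoon of
associative learning in very simple nervous systems. A toy sketch (all
numbers and names here are invented, not a model of any real organism):

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Strengthen weights[i][j] whenever pre-unit i and post-unit j
    are active together -- a cartoon of simple associative learning."""
    for i, p in enumerate(pre):
        for j, q in enumerate(post):
            weights[i][j] += lr * p * q
    return weights

# Repeatedly pair stimulus unit 0 with response unit 1; only that link grows.
w = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(10):
    hebbian_update(w, pre=[1.0, 0.0], post=[0.0, 1.0])
# w[0][1] grows toward 1.0 while the other weights stay at 0.0
```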

On Thu, May 7, 2015 at 9:18 PM, Jim Bromer <[email protected]> wrote:

> On Thu, May 7, 2015 at 12:07 PM, Benjamin Kapp <[email protected]> wrote:
>
>> This AGI algorithm can learn to play arbitrary Atari games, so it isn't
>> hardcoded to play chess or something like that.  Its input/output is the
>> same as a human player's: it sees pixels and "presses" buttons to control
>> the game.  So it doesn't have access to the internals of the game code or
>> anything like that.  This is the thing Google just spent several billion
>> dollars to acquire.  They are working on 3d games next, and trying to
>> figure out how to store memories.  How could we get this to work on 3d
>> games and store memories?
>> http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html
>> https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner
>>
>
> Benjamin,
> I see that you don't understand what I have been talking about. I will try
> to explain it a little more clearly.
> An AI program that can learn to play different Atari games does have a
> certain range of generality. I agree with that point of view.
> But, a program that learns from a single definite scoring variable is
> what I think is typical of 'narrow AI'. For example, the program is
> effectively hardcoded to use the score as a reinforcement scheme. If you
> think about this you may understand my point of view a little better.
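Jim's point here is visible in the standard Q-learning update: the score
change is simply handed in as the reward, so the agent never has to discover
what "score" means. A minimal tabular sketch (toy states and actions, not
the DeepMind implementation):

```python
def q_update(Q, state, action, next_state, score_delta,
             alpha=0.1, gamma=0.99, n_actions=4):
    """One Q-learning step: the reward IS the externally supplied
    score change -- the reinforcement signal is hardcoded in."""
    best_next = max(Q.get((next_state, a), 0.0) for a in range(n_actions))
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (score_delta + gamma * best_next - old)

Q = {}
q_update(Q, state=0, action=1, next_state=2, score_delta=1.0)
# Q[(0, 1)] moves from 0.0 toward the reward by alpha: 0.1 * 1.0 = 0.1
```

Everything the learner values flows through that single `score_delta`
argument, which is the "single definite scoring variable" being criticized.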
>
> My attempt to distinguish between 'narrow AI' variables and 'Semi-Strong
> AI' variables (or AGI variables) is not going to get very far. It is much
> too subtle to work out fully. But I can use the attempt to help me think
> about what the simplified *essential* features of a Semi-Strong AI program
> might be. So, for example, I might express this by saying that a simple
> reinforcement scheme which uses a value within a simple strongly typed
> variable like an 'Integer' is not an example of the essence of an AGI
> program object. (A simple reinforcement scheme using a simple typed
> variable might be part of an AGI program but it is not substantial enough
> to be used as the basis for an AGI program or a Semi-Strong AI program.)
>
> Finally, I am making the case in this message thread that since we cannot
> just jump into a running AGI program, we have to design preliminary tests
> to try to *begin* to substantiate how some simple program objects might be
> used to build a Semi-Strong AI program. For instance, our program is going
> to need to base its learning on an ability to infer the relevance of events
> that occur in the IO data environment, so why not start out by making the
> (simple) program that you described discover for itself that the scoreboard
> can be used as a reinforcement signal or an indicator of success?
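A crude first pass at this challenge might scan fixed screen regions and
nominate as "scoreboard" one whose pixel sum never decreases and changes only
occasionally, which is the signature of a score counter. The frames and
region boxes below are toy data, nothing like real Atari frames:

```python
def nominate_score_region(frames, regions):
    """frames: list of 2D pixel grids; regions: name -> (r0, r1, c0, c1)."""
    def region_sum(frame, box):
        r0, r1, c0, c1 = box
        return sum(sum(row[c0:c1]) for row in frame[r0:r1])

    best_name, best_sparsity = None, -1.0
    for name, box in regions.items():
        sums = [region_sum(f, box) for f in frames]
        deltas = [b - a for a, b in zip(sums, sums[1:])]
        if not deltas or any(d < 0 for d in deltas):
            continue  # score counters rarely go down
        changes = sum(1 for d in deltas if d > 0)
        if changes == 0:
            continue  # a constant region carries no signal
        sparsity = 1.0 - changes / len(deltas)  # prefer occasional jumps
        if sparsity > best_sparsity:
            best_name, best_sparsity = name, sparsity
    return best_name

# Toy frames: the top row holds a counter that ticks up every 5th frame,
# while the bottom rows flicker like a playfield.
frames = []
score = 0
for t in range(20):
    if t % 5 == 4:
        score += 1
    flicker = t % 3
    frames.append([[score] * 4 + [0] * 4,
                   [0] * 8,
                   [flicker] * 8,
                   [flicker] * 8])

regions = {"top": (0, 1, 0, 4), "bottom": (2, 4, 0, 8)}
winner = nominate_score_region(frames, regions)
# winner is "top": the flickering playfield has decreasing deltas
```

A real attempt would of course need far more robust statistics, but even
this heuristic shows what "discovering the reward channel" could mean
concretely.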
>
> Now suppose that a programmer took me up on my challenge and did just
> that. Then the next challenge would be to design the program so that it can
> derive inferences from different kinds of indicators (in the IO data
> environment), so that the problem is a little more complicated. We can stay
> with the same game design; it is just that the game would be more difficult
> for the would-be AGI program. Now, as long as my challenges were within
> reason, we might say that if the program was able to meet these successive
> challenges, that would indicate that we were heading in the right direction.
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com