In one sense, an AGI program should always be prepared to examine how
its 'learned' behavior works when applied to the 'real world'. These
examinations would not always be intensive, nor would they occur all of
the time.

I also think there is a problem, since the same sorts of methods that
are used to develop concepts would be used in checking them. But given
that the program is capable of some genuine learning, the ideas used in
re-examining a model might be quite different from those used in
originally creating it. Of course, an AGI program would be capable of,
and would tend to use, different pathways of thought, so it would not
even have to learn anything new in order to apply different strategies
when checking some concept against 'reality'.
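
To make this concrete, here is a rough sketch (plain Python; every name
in it is hypothetical, not taken from any real system) of a concept
being learned by one pathway and then checked against a stand-in
'reality' by a structurally different pathway:

import random

def learn_threshold(samples):
    # Learning pathway: fit a simple decision threshold from labeled
    # samples.
    positives = [x for x, label in samples if label]
    return min(positives) if positives else float('inf')

def check_by_probing(threshold, world, trials=100):
    # Checking pathway: rather than re-fitting, probe the 'world'
    # directly and measure how often the learned concept agrees with it.
    hits = 0
    for _ in range(trials):
        x = random.uniform(0.0, 10.0)
        if (x >= threshold) == world(x):
            hits += 1
    return hits / trials

world = lambda x: x >= 5.0   # stand-in oracle for 'reality'
samples = [(4.0, False), (5.5, True), (7.0, True), (3.0, False)]
t = learn_threshold(samples)
print('learned threshold:', t, 'agreement:', check_by_probing(t, world))

The point is only that the checking strategy shares no machinery with
the learning strategy, so it can catch errors the learner is blind to.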

In reason-based reasoning (an idea that I think may still annoy some
people), the reasons behind a concept (behind the structural relations
of concepts needed to represent concept-like knowledge) may be examined,
and that examination can lead to different speculations about the
concept-like knowledge. So this is one way that checking against the
real world, or against a consensus about the real world, can lead to
new insights. However, because this 'reality' checking is not keyed to
some preset basis, it brings greater difficulty in choosing which ideas
are more sound.
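
A similarly rough sketch of what that might look like (again, all names
here are hypothetical, not from any real system): each piece of
concept-like knowledge carries the reasons behind it, and a checker
re-tests the reasons rather than the concept itself.

from dataclasses import dataclass, field

@dataclass
class Concept:
    claim: str
    reasons: list = field(default_factory=list)  # (description, test) pairs

def reexamine(concept, evidence):
    # Score the concept by how many of its supporting reasons still hold
    # against the current evidence; weak support invites new speculation.
    held = [desc for desc, test in concept.reasons if test(evidence)]
    return len(held) / max(len(concept.reasons), 1), held

swans = Concept(
    claim='all swans are white',
    reasons=[
        ('every observed swan was white',
         lambda ev: all(c == 'white' for c in ev)),
        ('no report of a non-white swan',
         lambda ev: 'black' not in ev),
    ],
)
score, surviving = reexamine(swans, ['white', 'white', 'black'])
print('support:', score, 'surviving reasons:', surviving)

Nothing in the sketch decides which revised idea is more sound; that
selection problem is exactly the difficulty mentioned above.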
Jim Bromer


On Sat, Apr 4, 2015 at 9:18 PM, John Rose <[email protected]> wrote:
>> -----Original Message-----
>> From: Jim Bromer [mailto:[email protected]]
>>
>> I think the question of how an effective AGI program can be constructed
>> is still unanswered. Even supposing (as I do) that you can start with
>> simple programs (that are not going to be powerful) you still have to
>> answer the question of how the program can create models of reality
>> before you can check them. The article, Hybrid Automata for Formal
>> Modeling and Verification of Cyber-Physical Systems, that you mentioned
>> looks very interesting and the fact that they are writing about
>> something that is based on actual experiences is helpful. However, since
>> their modeling basis does not look entirely relevant, you have to wonder
>> if they are going to be able to answer the most important questions that
>> we *should be* asking.
>>
>
> The paper is just an example of what can be done with hybrid-automata-based
> model checking. It's not really meant to answer deep AGI questions but
> rather to be just a simple, single-color example in a broad spectrum of
> colors. Where I noticed this was in my research on automata-based
> multi-agent emergent systems, while trying to define agent structure
> using tuples. A more grandiose model-checking system would not appear as
> specific... models might not exist as delineated entities in the
> representation.
>
> To your point though - I think a program can create models in reality by
> initially participating, de facto, in a consensus reality, since we
> create the program and are participating members of that consensus, as
> are the base physical systems and the virtual systems lent to its
> functional inception. Post-"incubation", AGI reality pulls perception
> into it, so to say; the models represented could effectively emerge
> through realizing a structural potential in a hosted abstraction medium,
> perhaps through a reaction-diffusion morphogenesis? In a multi-agent
> emergence scenario, that is...
>
> John

