So what I gather is that you formulate hypotheses and you experimentally check
them (in some way).
But you do not do that for all elements of thought, or for all your activated
concepts, so this experimental "checking" is a special-case procedure.
I don't think the majority of your exteroceptions, interoceptions, or
proprioceptions require such "reality checking".
~PM

Date: Sun, 5 Apr 2015 17:49:28 -0400
Subject: Re: [agi] Continuous Reality Checking
From: [email protected]
To: [email protected]

On Sun, Apr 5, 2015 at 1:16 PM, Piaget Modeler <[email protected]> 
wrote:
What do you mean by "checking a concept"? How does one go about doing that?
How do people check concepts embedded in their neural network? How do you do
it? Kindly advise.
~PM


My best guess is that the last question was about how I check concepts in my
mind. For example, I have different ideas about your motivation for asking
this question. These ideas are chiaroscuros, and it could be a little awkward
if I started describing them. They are generalizations which might be applied
to many different people and to other kinds of situations. I have a great many
of them available, along with other generalizations that can act as modifiers
on them, so these generalizations can be shaped by using other
generalizations. This is a component model of intelligence, so a relatively
'simple' concept may be distributed, and further knowledge about these
characteristic generalizations is not typically going to be stored along with
the ad hoc collection of ideas that I have formed and am forming (about why
you asked that question in just the way you did, and so on). When you respond
(or eventually), I will have a little better idea of which of these sketchy
ideas (about why you asked these questions) was most accurate, if any of them
were. It is not a simple case of choosing which guess was best, but of
applying weights to the different theories as best I can figure it out, based
on some future remarks that you might make. But it is still guesswork, because
the chance that you will actually relate something to one of my ideas (about
you) is small. Of course, only a little of this is truly conscious.
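
(Purely as an illustration of that kind of weighting, a minimal Python
sketch; the theory names, the likelihood numbers, and the update rule are all
assumptions made up for the example, not anything specified in this thread.)

    # Sketch: keep weights over competing theories about why a question was
    # asked, and reweight them as new remarks (evidence) arrive. Everything
    # here (theory names, scores, the update rule) is an illustrative
    # assumption, not a mechanism anyone in this thread has specified.

    def reweight(weights, likelihoods):
        """Scale each theory's weight by how well it fits the new evidence,
        then renormalize so the weights remain comparable."""
        updated = {t: w * likelihoods.get(t, 1.0) for t, w in weights.items()}
        total = sum(updated.values())
        # If nothing fits at all, keep the old weights: it is still guesswork.
        return {t: w / total for t, w in updated.items()} if total else weights

    # Start with no preference among the sketchy ideas.
    theories = ["curiosity", "testing_me", "building_an_argument"]
    weights = {t: 1.0 / len(theories) for t in theories}

    # A future remark arrives; score how consistent each theory is with it
    # (made-up numbers). No single guess is "chosen"; all are reweighted.
    weights = reweight(weights, {"curiosity": 0.2,
                                 "testing_me": 0.7,
                                 "building_an_argument": 0.4})
    print(weights)
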
But I can also take these evaluations and apply them to other people and
other situations. What are the rules for making these future applications of
theories about characteristic motivations? I don't know exactly, but I assume
that I will be looking for similar situations. So after I come to a
conclusion about why you asked those questions in the way you did, I will
look at a shaped idea and presumably be ready to apply it to someone else in
one of these discussion groups. However, that presumptive application via
similarity might not be the right model to use (depending on how the idea is
shaped). So I might actually make an adjustment to the structure of how I
apply a shaped idea (about an AI discussion group participant) to another
person or situation. I can (obviously) change the rule or modify the
application process a little. The point is that the methodology of
generalizing an idea is not entirely preset. And it is clear that ideas (or
concepts) can interact with other concepts, and that these interactions may
be shaped by judgment and knowledge.

Jim Bromer
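
(Again purely as an illustration, a minimal sketch of that similarity-driven
transfer; the feature sets, the Jaccard similarity measure, and the threshold
standing in for the adjustable application rule are all invented for the
example.)

    # Sketch: apply a "shaped idea" formed in one situation to a new one only
    # when the situations are similar enough; the threshold stands in for the
    # adjustable application rule. All names and numbers are invented.

    def similarity(a, b):
        """Jaccard overlap of feature sets: a crude stand-in for
        'looking for similar situations'."""
        union = set(a) | set(b)
        return len(set(a) & set(b)) / len(union) if union else 0.0

    def apply_shaped_idea(idea, situation, threshold=0.5):
        """Return the idea's conclusion only when the new situation
        resembles the one the idea was shaped on."""
        if similarity(idea["situation"], situation) >= threshold:
            return idea["conclusion"]
        return None  # not similar enough; do not generalize

    idea = {
        "situation": {"discussion_group", "asks_how", "ai_topic"},
        "conclusion": "probing the mechanism behind a claim",
    }

    # A close match generalizes; a weaker one does not, unless the
    # application rule itself is adjusted (a looser threshold).
    print(apply_shaped_idea(idea, {"discussion_group", "asks_how", "math_topic"}))
    print(apply_shaped_idea(idea, {"discussion_group", "math_topic"}))
    print(apply_shaped_idea(idea, {"discussion_group", "math_topic"}, threshold=0.25))
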


> Date: Sun, 5 Apr 2015 07:44:08 -0400
> Subject: Re: [agi] Continuous Reality Checking
> From: [email protected]
> To: [email protected]
> 
> In one sense an AGI program should always be examining how its
> 'learned' behavior is working when applied to the 'real world'. These
> studies would not always be intensive or occur all of the time.
> 
> I also think that there is a problem, since the same sort of methods
> that are used to develop concepts would be used in checking them. But
> given that the program is capable of some genuine learning, the ideas
> used in reexamining a model might be quite different from those used
> in originally creating it. Of course, an AGI program would be capable
> of, and would tend to utilize, different pathways of thought, so it
> does not even have to learn anything new to use different strategies
> when checking some concept against 'reality'.
> 
> In reason-based reasoning (I think some people may still get annoyed
> by that particular idea) the reasons behind a concept (behind the
> structural relations of concepts needed to represent concept-like
> knowledge) may be examined, and that can lead to different speculations
> about the concept-like knowledge. So this is one way that checking
> against the real world, or a consensus about the real world, can lead
> to some new insights. However, because this 'reality' checking is not
> keyed to some preset basis, it will make it more difficult to choose
> which ideas are more sound.
> Jim Bromer
> 
> 
> On Sat, Apr 4, 2015 at 9:18 PM, John Rose <[email protected]> wrote:
> >> -----Original Message-----
> >> From: Jim Bromer [mailto:[email protected]]
> >>
> >> I think the question of how an effective AGI program can be constructed
> >> is still unanswered. Even supposing (as I do) that you can start with
> >> simple programs (that are not going to be powerful), you still have to
> >> answer the question of how the program can create models of reality
> >> before you can check them. The article you mentioned, Hybrid Automata
> >> for Formal Modeling and Verification of Cyber-Physical Systems, looks
> >> very interesting, and the fact that they are writing about something
> >> based on actual experiences is helpful. However, since their modeling
> >> basis does not look entirely relevant, you have to wonder if they are
> >> going to be able to answer the most important questions that we
> >> -should be- asking.
> >>
> >
> > The paper is just an example of what can be done with hybrid-automata-based
> > model checking. It's not really meant to answer deep AGI questions but
> > rather to be just a simple single-color example in a broad spectrum of
> > colors. Where I noticed this is in my research on automata-based
> > multi-agent emergent-systems thinking, trying to define agent structure
> > using tuples. A more grandiose model-checking system would not appear as
> > specific... models might not exist as delineated entities in
> > representation.
> >
> > To your point though - I think a program can create models in reality by
> > initially participating, de facto, in a consensus reality, since we
> > create the program and are participating members of the consensus, as
> > are the base physical systems and the involved virtual systems lent to
> > the functional inception. Post-"incubation", AGI reality pulls
> > perception into it, so to speak; the models represented could
> > effectively emerge through realizing a structural potential in a hosted
> > abstraction medium, perhaps through a reaction-diffusion morphogenesis?
> > In a multi-agent emergence scenario, that is...
> >
> > John


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
