I partially agree and partially disagree with your premise that the structure 
of concepts must be dynamically created.  I believe prototype structures must 
be predefined, but actual instances can be created on the fly.
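In code terms, the distinction could be sketched something like this (a purely hypothetical illustration; all names are invented):

```python
# Sketch: prototype structures are predefined, instances are created on
# the fly within that fixed structure.

class Prototype:
    """A predefined concept structure: fixed slot names, no values."""
    def __init__(self, name, slots):
        self.name = name
        self.slots = tuple(slots)  # the structure itself is fixed up front

    def instantiate(self, **fillers):
        # Instances are created dynamically, but only within the
        # predefined slot structure.
        unknown = set(fillers) - set(self.slots)
        if unknown:
            raise ValueError(f"no such slots: {unknown}")
        return {"prototype": self.name, **fillers}

machine = Prototype("machine", ["parts", "power_source", "function"])
pump = machine.instantiate(parts=["motor", "valve"], power_source="electric")
```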
~PM    +$0.02

> Date: Sat, 4 Apr 2015 10:14:54 -0400
> Subject: Re: [agi] Continuous Reality Checking
> From: [email protected]
> To: [email protected]
> 
> I think the question of how an effective AGI program can be
> constructed is still unanswered. Even supposing (as I do) that you can
> start with simple programs (that are not going to be powerful) you
> still have to answer the question of how the program can create models
> of reality before you can check them. The article, Hybrid Automata for
> Formal Modeling and Verification of Cyber-Physical Systems, that you
> mentioned looks very interesting and the fact that they are writing
> about something that is based on actual experiences is helpful.
> However, since their modeling basis does not look entirely relevant,
> you have to wonder if they are going to be able to answer the most
> important questions that we should be asking.
> 
> I believe that most decision models are what I have tried to call
> funnel methods. They try to funnel every decision down to an overly
> simplistic method. So you can have discrete logic-like methods of
> decisions and you can have weighted methods and that is pretty much
> it. At this point, when I try to offer my ideas and add a little
> competitive ego in order to try to get people to respond, I always end
> up with the 'well, we already thought of that' kind of response.
> That particular response goes along with the sense that their system
> of modeling can incorporate it. In other words, using weighted
> reasoning (fuzzy logic, probability and so on) is good enough because
> it can incorporate any new ideas about decision making and reasoning
> that someone might develop. However, that particular form of hand
> waving is how something like your 'reality checking' will inevitably
> be reduced to the narrower methods of traditional AI.
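The 'funnel' point can be made concrete with a toy sketch: both the discrete and the weighted style collapse every consideration into a single boolean or scalar before deciding. All names and numbers below are invented for illustration:

```python
# Two "funnel" decision methods: each reduces many considerations to
# one overly simple quantity before the final decision.

def discrete_decide(evidence):
    # Discrete, logic-like method: a hard conjunction of conditions.
    return all(evidence.values())

def weighted_decide(evidence, weights, threshold=0.5):
    # Weighted method (fuzzy/probabilistic flavor): everything is
    # funneled into one weighted sum, then thresholded.
    score = sum(weights[k] * v for k, v in evidence.items())
    return score >= threshold

ev = {"sensor_ok": 0.9, "model_fit": 0.4}
w = {"sensor_ok": 0.5, "model_fit": 0.5}
weighted_decide(ev, w)  # score 0.65 >= 0.5 -> True
```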
> 
> I have tried to point out that the 'structure' of concepts must be
> dynamically constructed, and therefore the whole idea that concept
> structure is something that can be *entirely* predefined is just not
> sound. So what I am trying to say is that if you are going to have
> your program do some reality checking, then it is going to have to
> examine the structural assumptions of the model as well. This leads
> to some major questions.
> 
> At any rate, I think it is best to start with a simple semi-AGI
> program, just so I can test an idea like this with simple 'realities'.
> 
> Jim Bromer
> 
> On Sat, Apr 4, 2015 at 7:16 AM, John Rose <[email protected]> wrote:
> >
> > Where I was going with this was: AGI dynamically builds an internal 
> > representation of something ... say an observed electromechanical 
> > machine ... and it needs to interact with it and make decisions based on that 
> > representation. The "consensus reality" I referred to would be the 
> > commonality of the representational models of the system observed by disparate 
> > intelligent macro-agents (including people), and the reality checking is 
> > also there to ensure the safety of any actions performed based on the 
> > decisions gleaned from the model. Regardless of how the model exists in 
> > the AGI mind, you still need to interact with the physical world 
> > periodically to learn and dynamically adjust the model such that prediction 
> > is maintained and improved. That holds even for complex, nondeterministic 
> > systems like economic models. I'm not for preprogramming models, but for 
> > preprogramming the ability to dynamically construct models, or to 
> > dynamically emerge models from a more generalized internal systems 
> > representational capability.
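The predict/observe/adjust cycle described here could be sketched minimally as follows (a toy scalar model; the update rule and all names are hypothetical, not from any actual AGI system):

```python
# Minimal sketch of periodic reality checking: compare the model's
# prediction against the observed world and nudge the model so that
# prediction is maintained and improved.

def reality_check(model, observe, steps=100, lr=0.1):
    for _ in range(steps):
        predicted = model["value"]   # model's current prediction
        actual = observe()           # interact with the physical world
        error = actual - predicted
        model["value"] += lr * error # dynamic adjustment of the model
    return model

model = {"value": 0.0}
reality_check(model, observe=lambda: 5.0)
# model["value"] converges toward the observed 5.0
```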
> >
> >
> >
> > John
> >
> >
> >
> > From: Anastasios Tsiolakidis [mailto:[email protected]]
> > Sent: Monday, March 30, 2015 12:00 PM
> > To: AGI
> > Subject: Re: [agi] Continuous Reality Checking
> >
> >
> >
> >
> >
> > On Sat, Mar 28, 2015 at 11:54 AM, John Rose <[email protected]> wrote:
> >
> > AGI needs to keep in touch with a consensus reality
> >
> >
> >
> > I very much doubt it. I think it is enough for AGI to "prove" it can 
> > develop increasingly better views of its reality, at its own pace; if an 
> > AGI passed the Turing test taking an hour for each of its answers, I'd be OK 
> > with that. Also, taking a cue from biology, we do seem to have the fast 
> > and the slow nervous system, as well as engineering solutions like the 
> > retina and optic nerve being "glued" onto the brain rather than the ankles, 
> > in no way pretending to do reality acquisition, but rather defining "real time" 
> > by their own working tempo, anywhere from 20 to 100 fps. Perhaps the brain 
> > can do some reading at 10 or 20 fps but none at 100 fps, and that's that: 
> > there seems to be no cache that, after closing one's eyes, allows the 
> > processing of a backlog. But we do have a backlog mechanism when it comes 
> > to general intelligence: we can close our eyes and work through a problem 
> > domain for seconds, minutes or even longer, with no real-time constraints.
> >
> > Now, the "social" aspect of intelligence is very important, and it is great 
> > that we could both, in a split second, agree about the contents of a room, 
> > while even 10 years in a room would not be enough to agree on Palestine or 
> > the Greek debt. For real-world mixed man-machine applications it would be 
> > important to achieve human-time performance, but not of the kind that 
> > would look like a physics simulation, forcing us to tackle the 
> > analog-to-digital issues raised in the article you quoted. For 
> > what it's worth, I believe that very sparse and crude representations will 
> > suffice, or are even necessary. I am also convinced that "multiresolution" 
> > representations will have to be included in any design, analogous to our 
> > short- and long-term memory - I am just a bit skeptical of trying to 
> > "program in" our mind models, for example limiting the artificial 
> > short-term memory to 7 items.
> >
> > All of the above applies to a kind of "finished product". But during the 
> > design and evolution of an AGI, as I have stated elsewhere, one could 
> > indeed ride the real-time horse, emphasizing the responsiveness of the 
> > machine. One could, for example, explore the capabilities of one of these 
> > enormous CUDA cards or FPGAs, acknowledging that you could never respond 
> > faster than so many nanoseconds (FPGAs being much slower). Then again, the 
> > optimal "system dimensioning" is the one that includes your sensors and 
> > actuators: if you can only send commands to your motor 10 times a second, 
> > then why would you read your body temperature 100 times a second and 
> > analyze your state 1000 times a second? You are better off running the 
> > analysis a split second before sending it to the actuator and using 99% of 
> > your "horsepower" for something else - but for what? Biological evolution in 
> > these cases follows closely the physical constraints of survival and 
> > reproduction, and would not couple a nanosecond brain with a millisecond 
> > muscle. Our engineering is much more arbitrary, and we would find something 
> > to do with the extra cycles, probably involving longer time scales. However, 
> > that "something" would not have to consider analog domains at all.
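The dimensioning arithmetic here can be made explicit, using the hypothetical rates from the text:

```python
# If the actuator only consumes decisions at 10 Hz, sampling a sensor
# at 100 Hz wastes most of the reads: only the last read before each
# command can influence it.

ACTUATOR_HZ = 10   # motor accepts commands 10 times a second
SENSOR_HZ = 100    # temperature could be read 100 times a second

reads_per_command = SENSOR_HZ // ACTUATOR_HZ    # 10 reads, only 1 used
wasted_fraction = 1 - ACTUATOR_HZ / SENSOR_HZ   # 90% of reads unused
```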
> >
> >
> >
> > AT
> >
> > AGI | Archives | Modify Your Subscription
> >
> >
> >
> 
> 
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/19999924-4a978ccc
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
                                          

