In the brain, the software is the hardware.  That is to say, the neurons,
their firing and not firing, and the connections between them are the
brain's software.  The wiring diagram of the eye is largely
predetermined, as is that of the brain, at least initially.  For example,
normal humans have a corpus callosum, a large bundle of neural
connections across the two hemispheres; without it, humans seem to have
two independent brains.  So if we point at this big connection between
the two hemispheres, it shouldn't be rebuffed with "oh, you are just
looking at hardware and can't make any determinations about the
pre-wiring of the software," because that pre-wiring is exactly what the
corpus callosum is, and more generally what the gross anatomical
regularities found across brains are.

On Sat, Apr 4, 2015 at 2:57 PM, Jim Bromer <[email protected]> wrote:

> On Sat, Apr 4, 2015 at 2:45 PM, Benjamin Kapp <[email protected]> wrote:
>
>> It seems to me that the brain is hard-wired in certain ways.  For example,
>> the eye is considered part of the brain, and its design is largely
>> predetermined.  And the brain as a whole has the same folds in the same
>> places across brains (on average), and not having such a design often leads
>> to abnormal mental function.  Insofar as the brain is a model for how to
>> go about creating AGI, perhaps this stands as an example that not everything
>> needs to be dynamically created by experience.
>>
>
>
> That is not right. If you look at the hardware of a computer you would not
> be able to infer the dynamic relations (and the potential of the dynamic
> references and abstractions) of the software. (This is an old AI discussion
> group standard argument, by the way.) So we cannot be sure exactly what the
> human brain is doing. (That's an "I don't know, I only work here" kind of
> argument.)
>
>
>
>
>>
>> On Sat, Apr 4, 2015 at 2:39 PM, Jim Bromer <[email protected]> wrote:
>>
>>> I certainly don't mean that all structure has to be ad hoc. Some
>>> structure has to be implemented, and there is no reason why default
>>> conceptualizations might not be used, but in general the structure of a
>>> concept, and the conceptual background that the concept is going to be
>>> applied to, has to be learned. Just as I did not mean that all conceptual
>>> structure has to be ad hoc, I also don't mean to suggest that all concepts
>>> can be acquired as individual objects.
>>>
>>>
>>> Jim Bromer
>>>
>>> On Sat, Apr 4, 2015 at 1:59 PM, Piaget Modeler <
>>> [email protected]> wrote:
>>>
>>>> I partially agree and partially disagree with your premise that the
>>>> structure of concepts must be dynamically created.  I believe prototype
>>>> structures must be predefined, but actual instances can be created on
>>>> the fly.
>>>>
>>>> ~PM    +$0.02
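The prototype/instance split described above can be sketched in a few
lines of Python; the concept kinds, slot names, and values here are
purely illustrative, not anything proposed in the thread:

```python
from copy import deepcopy

# Predefined prototype structures: the slots a concept of each kind has.
PROTOTYPES = {
    "machine": {"parts": [], "state": "unknown", "observed_behaviors": []},
    "agent":   {"goals": [], "beliefs": {}},
}

def new_instance(kind, **overrides):
    """Create a concrete instance on the fly from a fixed prototype."""
    instance = deepcopy(PROTOTYPES[kind])  # copy so the prototype stays clean
    instance.update(overrides)
    return instance

# An instance built dynamically while the prototype remains predefined.
pump = new_instance("machine", parts=["motor", "valve"], state="running")
```

The structure (which slots exist) is fixed ahead of time; only the
bindings are created at run time, which is one way to read the premise.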
>>>>
>>>> > Date: Sat, 4 Apr 2015 10:14:54 -0400
>>>> > Subject: Re: [agi] Continuous Reality Checking
>>>> > From: [email protected]
>>>> > To: [email protected]
>>>>
>>>> >
>>>> > I think the question of how an effective AGI program can be
>>>> > constructed is still unanswered. Even supposing (as I do) that you can
>>>> > start with simple programs (that are not going to be powerful) you
>>>> > still have to answer the question of how the program can create models
>>>> > of reality before you can check them. The article, Hybrid Automata for
>>>> > Formal Modeling and Verification of Cyber-Physical Systems, that you
>>>> > mentioned looks very interesting and the fact that they are writing
>>>> > about something that is based on actual experiences is helpful.
>>>> > However, since their modeling basis does not look entirely relevant,
>>>> > you have to wonder if they are going to be able to answer the most
>>>> > important questions that we should be asking.
>>>> >
>>>> > I believe that most decision models are what I have tried to call
>>>> > funnel methods. They try to funnel every decision down to an overly
>>>> > simplistic method. So you can have discrete logic-like methods of
>>>> > decisions and you can have weighted methods and that is pretty much
>>>> > it. At this point, when I try to offer my ideas and add a little
>>>> > competitive ego in order to try to get people to respond, I always end
>>>> > up with the - well, we already thought of that - kind of response.
>>>> > That particular response goes along with the sense that our system of
>>>> > modeling can incorporate that. So in other words, using weighted
>>>> > reasoning (fuzzy logic, probability and so on) is good enough because
>>>> > it can incorporate any new ideas about decision making and reasoning
>>>> > that someone might develop. However, that particular form of hand
>>>> > waving is how something like your 'reality checking' will inevitably
>>>> > be reduced to the narrower methods of traditional AI.
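The two "funnel" method families described above can be sketched side by
side; the facts, evidence values, weights, and threshold are purely
illustrative:

```python
# Funnel method 1: discrete, logic-like decisions (hard rules, no degrees).
def discrete_decision(facts):
    return facts.get("obstacle_ahead") and facts.get("moving")

# Funnel method 2: weighted decisions (fuzzy logic / probability style),
# reducing everything to a score compared against a threshold.
def weighted_decision(evidence, weights, threshold=0.5):
    score = sum(evidence[k] * weights[k] for k in weights)
    return score > threshold

stop_hard = discrete_decision({"obstacle_ahead": True, "moving": True})
stop_soft = weighted_decision({"obstacle": 0.9, "speed": 0.4},
                              {"obstacle": 0.7, "speed": 0.3})
```

The point of the funnel criticism is that both methods collapse every
decision into one narrow final step, a rule match or a scalar score.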
>>>> >
>>>> > I have tried to point out that the 'structure' of concepts must be
>>>> > dynamically constructed, and therefore the whole idea that concept
>>>> > structure is something that can be *entirely* predefined is just not
>>>> > sound. So, what I am trying to say is that if you are going to have
>>>> > your program do some reality checking then it is going to have to be
>>>> > examining the structural assumptions of the model as well. This leads
>>>> > to some major questions.
>>>> >
>>>> > At any rate, I think it is best to start with a simple semi-AGI
>>>> > program just so I can test an idea like this with simple 'realities'.
>>>> >
>>>> > Jim Bromer
>>>> >
>>>> > On Sat, Apr 4, 2015 at 7:16 AM, John Rose <[email protected]>
>>>> wrote:
>>>> > >
>>>> > > Where I was going with this was: AGI dynamically builds an internal
>>>> > > representation of something, say an observed electromechanical
>>>> > > machine, and it needs to interact with it and make decisions based
>>>> > > on the representation. The "consensus reality" I referred to would
>>>> > > be the commonality of the representational model of the system
>>>> > > observed by disparate intelligent macro-agents (those include
>>>> > > people), and the reality checking is also to ensure the safety of
>>>> > > any actions performed based on the decisions gleaned from the model.
>>>> > > Regardless of how the model exists in the AGI mind, you still need
>>>> > > to interact with the physical world periodically to learn and
>>>> > > dynamically adjust the model so that prediction is maintained and
>>>> > > improved. That holds even for complex, nondeterministic systems like
>>>> > > economic models. I'm not for preprogramming models, but for
>>>> > > preprogramming the ability to dynamically construct models, or to
>>>> > > dynamically emerge models from a more generalized internal systems
>>>> > > representational capability.
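The periodic interact-and-adjust loop described above can be sketched as
a toy online update; the linear model, learning rate, and "true system"
are illustrative assumptions, not anything proposed in the thread:

```python
# Toy "reality check" loop: predict, observe the physical world, and
# nudge the internal model so that prediction is maintained and improved.
def update(model, observation, lr=0.1):
    prediction = model["gain"] * observation["input"]
    error = observation["output"] - prediction
    model["gain"] += lr * error * observation["input"]  # gradient step
    return abs(error)

model = {"gain": 0.0}                      # internal representation
# Hypothetical true system: output = 2 * input.  Repeated interaction
# shrinks the prediction error toward zero.
errors = [update(model, {"input": 1.0, "output": 2.0}) for _ in range(50)]
```

However the model is represented internally, some loop of this shape is
needed to keep it coupled to the world it is supposed to predict.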
>>>> > >
>>>> > >
>>>> > >
>>>> > > John
>>>> > >
>>>> > >
>>>> > >
>>>> > > From: Anastasios Tsiolakidis [mailto:[email protected]]
>>>> > > Sent: Monday, March 30, 2015 12:00 PM
>>>> > > To: AGI
>>>> > > Subject: Re: [agi] Continuous Reality Checking
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> > > On Sat, Mar 28, 2015 at 11:54 AM, John Rose <
>>>> [email protected]> wrote:
>>>> > >
>>>> > > AGI needs to keep in touch with a consensus reality
>>>> > >
>>>> > >
>>>> > >
>>>> > > I very much doubt it. I think it is enough for AGI to "prove" it
>>>> > > can develop increasingly better views of its reality, at its own
>>>> > > pace; if an AGI passed the Turing test taking an hour for each of
>>>> > > its answers, I'd be OK with that. Also, taking a cue from biology,
>>>> > > we do seem to have the fast and the slow nervous system, as well as
>>>> > > engineering solutions like the retina and optic nerve being "glued"
>>>> > > onto the brain rather than the ankles, in no way pretending to do
>>>> > > real-time reality acquisition but rather defining "real time" by
>>>> > > their own working tempo, anywhere from 20 to 100 fps. Perhaps the
>>>> > > brain can do some reading at 10 or 20 fps but none at 100 fps, and
>>>> > > that's that; there seems to be no cache that, after closing one's
>>>> > > eyes, allows the processing of a backlog. But we do have a backlog
>>>> > > mechanism when it comes to general intelligence: we can close our
>>>> > > eyes and work through a problem domain for seconds, minutes, or even
>>>> > > longer, with no real-time constraints.
>>>> > >
>>>> > > Now, the "social" aspect of intelligence is very important, and it
>>>> > > is great that we could both, in a split second, agree about the
>>>> > > contents of a room, while even 10 years in a room would not be
>>>> > > enough to agree on Palestine or the Greek debt. For real-world mixed
>>>> > > man-machine applications it would be important to achieve human-time
>>>> > > performance, but not "of the kind" that would look like a physics
>>>> > > simulation, forcing us to tackle the analog-to-digital issues in the
>>>> > > sense of the article you quoted. For what it's worth, I believe that
>>>> > > very sparse and crude representations will suffice, or are even
>>>> > > necessary. I am also convinced that "multiresolution"
>>>> > > representations will have to be included in any design, analogous to
>>>> > > our short- and long-term memory; I am just a bit skeptical of trying
>>>> > > to "program in" our mind models, for example limiting the artificial
>>>> > > short-term memory to 7 items.
>>>> > >
>>>> > > All of the above applies to a kind of "finished product". But
>>>> > > during the design and evolution of an AGI, I have stated elsewhere
>>>> > > that one could indeed ride the real-time horse, emphasizing the
>>>> > > responsiveness of the machine. One could, for example, explore the
>>>> > > capabilities of one of these enormous CUDA cards or FPGAs,
>>>> > > acknowledging that you could never respond faster than so many
>>>> > > nanoseconds (FPGAs being much slower). Then again, the optimal
>>>> > > "system dimensioning" is the one that includes your sensors and
>>>> > > actuators: if you can only send commands to your motor 10 times a
>>>> > > second, then why would you read your body temperature 100 times a
>>>> > > second and analyze your state 1000 times a second? You are better
>>>> > > off running the analysis a split second before sending it to the
>>>> > > actuator and using 99% of your "horsepower" for something else. But
>>>> > > for what? Biological evolution in these cases follows closely the
>>>> > > physical constraints of survival and reproduction, and would not
>>>> > > couple a nanosecond brain with a millisecond muscle. Our engineering
>>>> > > is much more arbitrary, and we would find something to do with the
>>>> > > extra cycles, probably involving longer time scales. However, that
>>>> > > "something" would not have to consider analog domains at all.
>>>> > >
>>>> > >
>>>> > >
>>>> > > AT
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> >
>>>> >
>>>>
>>>
>>>
>>
>>
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
