Most of us old guys tend to have a bias toward thinking about computer
programs as partitioned into algorithm and data, or operation and
operand. Early AI can be strongly characterized by this distinction. The
distinction is somewhat obscured in artificial neural networks, but that
is because the network uses one process for everything - and that is
another problem with ANN theories. For example, ANNs do not distinguish
between the distinct data objects of a type that they can categorize, so
they cannot - in themselves - build on the distinguishing
characteristics of the individual data objects that they recognize as
examples of the type they have been trained to recognize. So object
decomposition, like the image grammar decomposition that Ben discussed
in his paper, would help. However, the problem is that (I think) we need
dynamic methods to work with object types and individuals: dynamic
decompositions (or, perhaps more clearly, dynamic compositions) of their
distinguishing characteristics. And part of those dynamics are
individualized algorithms that can be derived for object types (and
individuals) by learning.
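To make the idea a little more concrete, here is a toy sketch in Python of
what I mean by a decomposition of an object type versus the distinguishing
characteristics of an individual. All the names in it (Part, ObjectType, the
"face" example) are just my own illustration, not anything from Ben's paper:

```python
# Toy sketch: an object type carries a "decomposition grammar" of parts,
# while an individual carries its own trait bindings to those parts.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    subparts: list = field(default_factory=list)  # recursive decomposition

@dataclass
class ObjectType:
    name: str
    grammar: Part  # root of the part decomposition

    def decompose(self):
        """Flatten the grammar into a list of part names."""
        out, stack = [], [self.grammar]
        while stack:
            p = stack.pop()
            out.append(p.name)
            stack.extend(p.subparts)
        return out

@dataclass
class Individual:
    type_: ObjectType
    traits: dict  # distinguishing characteristics bound to parts

    def distinguishing(self):
        """Keep only traits bound to parts the type's grammar contains."""
        parts = set(self.type_.decompose())
        return {k: v for k, v in self.traits.items() if k in parts}

# Example: a "face" type and one individual face.
face = ObjectType("face", Part("face", [Part("eyes"), Part("nose"), Part("mouth")]))
fred = Individual(face, {"eyes": "blue", "nose": "crooked", "hat": "fedora"})
print(face.decompose())       # part names of the type
print(fred.distinguishing())  # -> {'eyes': 'blue', 'nose': 'crooked'}
```

The point of the sketch is only that the type carries a grammar of parts
while the individual carries its own bindings to those parts - which is
exactly the distinction I am claiming ANNs do not make on their own.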

There are all sorts of subtleties that could be used to challenge or
buttress this theory, but the argument for the potential of dynamic
decomposition grammars is pretty strong. At the very least we cannot
rule them out without extensive testing. And if that is true, then the
same can be said for dynamic algorithm decomposition (composition)
grammars as well. Present-day ANNs just cannot do that.
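By dynamic algorithm composition I mean something like the following toy
sketch: a per-type pipeline assembled from a shared pool of small
operations, where the sequence itself is the thing that would get learned.
The operation names and pipelines here are invented purely for
illustration:

```python
# Toy sketch: compose per-type algorithms from a shared pool of small ops,
# rather than running one fixed network pass over everything.
OPS = {
    "normalize": lambda x: [v / max(x) for v in x] if max(x) else x,
    "edges":     lambda x: [b - a for a, b in zip(x, x[1:])],
    "threshold": lambda x: [1 if v > 0.5 else 0 for v in x],
}

def compose(op_names):
    """Build one algorithm from a (learned) sequence of op names."""
    def algorithm(x):
        for name in op_names:
            x = OPS[name](x)
        return x
    return algorithm

# Different object types get different learned compositions.
recognize_blob = compose(["normalize", "threshold"])
recognize_edge = compose(["normalize", "edges"])

print(recognize_blob([2, 8, 4, 10]))  # -> [0, 1, 0, 1]
print(recognize_edge([2, 8, 4, 10]))  # successive differences, normalized
```

A trained ANN bakes one fixed function into its weights; the point of the
sketch is that here the composition itself is a first-class object that
learning could rearrange per object type (or per individual).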

Jim Bromer

On Sun, Sep 27, 2015 at 5:15 PM, Steve Richfield <[email protected]>
wrote:

> Ed, Jim, et al,
>
> The neuromorphic learning that I have seen has been orders of magnitude
> too slow. We easily learn from a single observation - with maybe a
> confirming observation. However, when we attempt to tweak present methods
> to learn that fast, they "lock up" when they exceed their capacity: they
> hold superstitious learning artifacts, lower levels end up producing
> parameters that are NOT the parameters that higher levels need, etc.
>
> I wrote a paper for the very 1st NN conference (in San Diego) that
> proposed a way around this problem, which I dubbed *Instantaneous
> Learning* - but, as usual, people of that time, just like present-day
> "researchers" (who continue to forget the "re" in "research"), were more
> into fiddling than into understanding the barriers they were up against.
> Hence this particular barrier appears to have a solution - just not the
> present non-solutions.
>
> Steve
> ================
>
> On Sun, Sep 27, 2015 at 9:40 AM, Jim Bromer <[email protected]> wrote:
>
>> On Fri, Sep 25, 2015 at 9:01 AM, EdFromNH . <[email protected]> wrote:
>>
>>> Ben wrote a good paper explaining one of the reasons deep learning is
>>> often weirdly flaky. (  http://goertzel.org/DeepLearning_v1.pdf  )  If
>>> an AGI's deep learning systems were organized more the way the deep
>>> learning systems in the human brain are (and as Ben's paper suggested
>>> they should be), many of these problems would be eliminated.
>>>
>>
>>
>> I liked Ben's paper and I think I agree with most of it, but I do not
>> think he has a solution to the problem. If an image-grammar decomposition
>> (or that kind of thing) were produced at the various levels of a network
>> for a recognizable object, you would probably have some very useful
>> information. (I think you also need to map comparative objects onto these
>> object grammar decompositions even after substantive learning, but that
>> is not the main point of my criticism.) The problem is that hierarchies
>> are not good enough for AGI because knowledge is relativistic. Of course,
>> people can dismiss my criticism for any number of different reasons.
>> Knowledge relativism does not mean that there are infinities of
>> infinities that have to be processed, but it does mean that the program
>> has to be able to get from one point to another in the
>> classification-interpretation process, and even if the process is very
>> close to making a good interpretation, the complexity distance between
>> where it is and where it needs to go might be intractable for
>> contemporary computers. One of the reasons I am still trying to find a
>> solution to 3-SAT is that if that problem were solved, a variety of
>> understandable artificial problems could be used to quickly test
>> different theoretical AGI algorithms. I have been thinking about a
>> cross-categorical mathematical model that had endless efficiencies that
>> could be defined as an abstraction, but it would have to be so idealized
>> that it would be useless (because any useful application would invalidate
>> the majority of the mathematical methods that could be realized for the
>> particular application).
>>
>> I am good at talking (about this) but I do not have much to show for it.
>> But I am really good at talking about it. What I am saying makes sense.
>> Jim Bromer
>>
>> On Fri, Sep 25, 2015 at 9:01 AM, EdFromNH . <[email protected]> wrote:
>>
>>> Jim Bromer,
>>>
>>> Ben wrote a good paper explaining one of the reasons deep learning is
>>> often weirdly flaky. (  http://goertzel.org/DeepLearning_v1.pdf  )  If
>>> an AGI's deep learning systems were organized more the way the deep
>>> learning systems in the human brain are (and as Ben's paper suggested
>>> they should be), many of these problems would be eliminated.
>>>
>>> Obviously more than just deep learning is required to make an AGI.  But
>>> a properly designed deep learning system can provide a powerful and
>>> important part of one.  Jeff Hawkins's hierarchical temporal memory is
>>> designed to automatically learn hierarchical hidden Markov transition
>>> networks that provide deep learning not only of perceptual patterns but
>>> also of behaviors.  In addition to deep learning, an AGI needs other
>>> things, such as thresholding and attention-focusing systems, short- and
>>> long-term memory mechanisms (which could be built into deep learning
>>> data structures), and value systems that perform functions roughly
>>> similar to our emotions and drives.  My post above about a
>>> memristor-on-CMOS 300mm wafer with as many synapses as a human cortex
>>> made clear that such an artificial cortex would not be enough to make an
>>> AGI.  The human cortex is comatose without the brain's subcortical
>>> components.
>>>
>>> On Thu, Sep 24, 2015 at 9:27 PM, Jim Bromer <[email protected]> wrote:
>>>
>>>> On Thu, Sep 24, 2015 at 1:56 PM, EdFromNH . <[email protected]> wrote:
>>>>
>>>>> Given that we already know how to perform deep learning, and many
>>>>> other AGI algorithms,  efficiently on neural net hardware, I should think
>>>>> the people on this mailing list who are truly interested in AGI would be
>>>>> extremely interested in the advances in neuromorphic computing.
>>>>>
>>>>
>>>> I think the mailing list has been depleted of much of the interest in
>>>> the field that it once had.
>>>>
>>>> I agree that AI methods that can be applied to many different kinds of
>>>> problems requiring intelligence are AGI algorithms, because they are
>>>> general AI algorithms. However, the denial - or the lack of
>>>> recognition - of the significance of the fact that deep learning has
>>>> not really achieved human-like reasoning is a little curious. If my
>>>> memory is working (it does not always work that well), deep learning
>>>> systems are often designed to combine discrete methods - and, more
>>>> dramatically for this kind of criticism, many narrow problem-class
>>>> methods - with neural networks. So it is as if you are ignoring all the
>>>> specific narrow aspects of various projects in the field (as well as
>>>> discrete AI methods) that have been essential to generating the wow
>>>> factor in deep learning.
>>>>
>>>> The presumption that the neuromorphic methods that were mentioned would
>>>> naturally succeed because they would somehow represent - and even
>>>> transcend - natural nervous systems is a little silly.  You do
>>>> recognize that there is more to it, but I get the feeling that you
>>>> don't appreciate the limitations of the deep learning methods. To make
>>>> this a little more understandable: I am not saying that those
>>>> limitations are fixed, but that it is the kinds of limitations that are
>>>> important. Technological sophistication is not going to solve the
>>>> problems without the more-to-it-than-that stuff.
>>>>
>>>> Jim Bromer
>>>>
>>>> On Thu, Sep 24, 2015 at 1:56 PM, EdFromNH . <[email protected]> wrote:
>>>>
>>>>> Thanks, justcamel.  I did not know that.
>>>>>
>>>>> I am amazed there has not been more discussion on this list about my
>>>>> neuromorphic post above.  The article I described in it, along with
>>>>> other articles I have read, implies we may well be within a relatively
>>>>> few years of having hardware with almost twice as many neurons and
>>>>> synapses as the human cortex on one 300mm silicon wafer, which could
>>>>> be manufactured at a marginal cost of $7,000 to $15,000 and would
>>>>> consume only about one kilowatt. Of course, there is more to making a
>>>>> roughly human-level AGI than that, but such relatively inexpensive and
>>>>> incredibly powerful AGI hardware could greatly accelerate the advent
>>>>> of machine superintelligence.
>>>>>
>>>>> Given that we already know how to perform deep learning, and many
>>>>> other AGI algorithms, efficiently on neural net hardware, I should
>>>>> think the people on this mailing list who are truly interested in AGI
>>>>> would be extremely interested in the advances in neuromorphic
>>>>> computing.  Neuromorphic computing is almost certainly the path that
>>>>> will lead to powerful AGI.  But based on the deafeningly silent
>>>>> response of this mailing list to my post above, it seems not.
>>>>>
>>>>> On Wed, Sep 23, 2015 at 7:09 AM, justcamel <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Your own contributions to the mailing list do not end up in your
>>>>>> inbox ... just check out the mailing list "directly" ...
>>>>>> https://www.listbox.com/member/archive/303/
>>>>>>
>>>>>> On 22.09.2015 02:25, EdFromNH . wrote:
>>>>>>
>>>>>>> [[[[[P.S. I am resending this because this intended content didn't
>>>>>>> arrive until the 4th entry in the prior thread in which I tried to 
>>>>>>> discuss
>>>>>>> this.]]]]]
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> -------------------------------------------
>>>>>> AGI
>>>>>> Archives: https://www.listbox.com/member/archive/303/=now
>>>>>> RSS Feed:
>>>>>> https://www.listbox.com/member/archive/rss/303/8630185-a57a74e1
>>>>>> Modify Your Subscription: https://www.listbox.com/member/?&;
>>>>>> Powered by Listbox: http://www.listbox.com
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
>
> --
> Full employment can be had with the stroke of a pen. Simply institute a
> six hour workday. That will easily create enough new jobs to bring back
> full employment.
>



