I guess I should not have said that I totally agree with Steve's comment.
When he said dimensions, I was thinking more of types, such as abstract
types or something effectively similar. Suppose there were a non-standard,
innovative mathematics that could deal effectively with data of different
abstract types. Then it would be capable of calculating with different
abstractions, some of which might be said to play roles similar to
dimensions in the standard contemporary mathematics of measurable objects.
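
To illustrate the kind of thing I mean, here is a toy sketch of my own (in
Python, purely hypothetical, not a claim about any actual system) where
each quantity carries an abstract type tag and the arithmetic itself
refuses to combine incompatible tags:

    # Toy sketch: values tagged with an abstract type.
    # Addition is only defined when the tags match.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Tagged:
        value: float
        tag: str  # abstract type, e.g. "length" or "mass"

        def __add__(self, other):
            if self.tag != other.tag:
                raise TypeError(f"cannot add {self.tag} to {other.tag}")
            return Tagged(self.value + other.value, self.tag)

    a = Tagged(3.0, "length")
    b = Tagged(2.0, "length")
    c = Tagged(5.0, "mass")

    print(a + b)  # fine: compatible abstract types
    # a + c       # would raise TypeError: cannot add length to mass

The tags here play the role that dimensions play in the ordinary
mathematics of measurement, but nothing stops the tags from being far more
abstract.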

The method by which neurons form generative 'connections' is irrelevant to
the capability of neural activity to transmit data. The brain must be able
to form reactions and to make choices, like inhibiting or activating
different kinds of reactions to some event. This means that the brain must
be reacting across some distances. It could be done by long synapses; I
have no way of knowing.

If natural neural networks are able to implement logical or symbolic
functions, then they certainly have the potential to transmit richer data
that can encode a great many variations of data objects. So, regardless of
the details of how firing 'connections' are formed, the model of thought
that most of us feel is in the neighborhood of the ballpark, if not in the
dugout, is some variation of the computational model of mind. The idea that
Hebbian theory might be used to impose a severe limitation on the range of
neural symbolic processing is not supported by our experiences.
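
To make the Hebbian point concrete, here is a toy sketch (my own
hypothetical illustration, with made-up names, not anyone's actual model).
The bare co-occurrence rule strengthens a connection between any two
co-active units, so by itself it neither enforces nor forbids the kind of
dimensional tagging Steve suspects; any such restriction would have to be
an extra mechanism layered on top:

    # Toy sketch of bare Hebbian co-occurrence learning. The update rule
    # is blind to what the units represent ("feet", "kilograms", ...).
    import itertools

    units = ["feet", "meters", "kilograms", "liters"]
    w = {frozenset(p): 0.0 for p in itertools.combinations(units, 2)}

    def hebb_update(active, rate=0.1):
        # Strengthen every pair of co-active units, whatever they encode.
        for pair in itertools.combinations(active, 2):
            w[frozenset(pair)] += rate

    hebb_update({"feet", "meters"})     # dimensionally compatible pair
    hebb_update({"feet", "kilograms"})  # incompatible, but the rule is blind

So if there is a severe limitation, it does not come from the Hebbian
update itself.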
Jim Bromer


On Thu, Jun 20, 2019 at 7:33 PM Matt Mahoney <mattmahone...@gmail.com>
wrote:

> I disagree. By what mechanism would neurons representing feet and meters
> connect, but not kilograms and liters?
>
> Neurons form connections by Hebb's rule. Neurons representing words form
> connections when they appear close together or in the same context.
>
> On Thu, Jun 20, 2019, 4:14 PM Jim Bromer <jimbro...@gmail.com> wrote:
>
>> Steve said: I strongly suspect biological synapses are tagged in some way
>> to only connect with other synapses carrying dimensionally compatible
>> information.
>>
>> I totally agree. So one thing that I am wondering about is whether that
>> could be computed using a novel kind of mathematics. Intuitively, I would
>> say absolutely.
>>
>> A truly innovative AI mathematical system would not 'solve' every AI
>> problem, but could it be developed so that it helps speed up and direct an
>> initial analysis of input? Intuitively I am pretty sure it can be done, but
>> I am not at all sure that I could come up with a method.
>> Jim Bromer
>>
>>
>> On Thu, Jun 20, 2019 at 1:13 PM Steve Richfield <
>> steve.richfi...@gmail.com> wrote:
>>
>>> Jim,
>>>
>>> Many systems are like this: e.g., while adding probabilities to compute a
>>> probability doesn't make sense, adding counts having poor significance,
>>> which can look a lot like adding probabilities, can make sense to produce
>>> a count.
>>>
>>> Where this gets confusing is in sensory fusion. Present practice is
>>> usually some sort of weighted summation, when CAREFUL analysis would
>>> probably involve various nonlinearities to convert inputs to a canonical
>>> form that makes sense to add, followed by another nonlinearity to convert
>>> the sum to suitable output units.
>>>
>>> I strongly suspect biological synapses are tagged in some way to only
>>> connect with other synapses carrying dimensionally compatible information.
>>>
>>> Everyone seems to focus on values being computed, when it appears that
>>> it is the dimensionality that restricts learning to potentially rational
>>> processes.
>>>
>>> Steve
>>>
>>> On Thu, Jun 20, 2019, 9:14 AM Jim Bromer <jimbro...@gmail.com> wrote:
>>>
>>>> I originally thought about novel computational rules. Arithmetic is not
>>>> reversible because the input operands cannot be recovered from a given
>>>> result. That makes it a type of compression. Furthermore it uses a
>>>> limited set of rules. That makes it a super compression method.
>>>>
>>>> On Thu, Jun 20, 2019, 12:08 PM Jim Bromer <jimbro...@gmail.com> wrote:
>>>>
>>>>> I guess I understand what you mean.
>>>>>
>>>>> On Thu, Jun 20, 2019, 12:07 PM Jim Bromer <jimbro...@gmail.com> wrote:
>>>>>
>>>>>> I think your use of metaphors, especially metaphors that were
>>>>>> intended to emphasize your thoughts through exaggeration, may have
>>>>>> confused me. Would you explain your last post, Steve?
>>>>>>
>>>>>> On Thu, Jun 20, 2019, 12:02 PM Steve Richfield <
>>>>>> steve.richfi...@gmail.com> wrote:
>>>>>>
>>>>>>> Too much responding without sufficient thought. After a week of
>>>>>>> thought regarding earlier postings on this thread...
>>>>>>>
>>>>>>> Genuine computation involves manipulating numerically expressible
>>>>>>> values (e.g. 0.62), dimensionality (e.g. probability), and significance
>>>>>>> (e.g. +/- 0.1). Outputs of biological neurons appear to fit this model.
>>>>>>>
>>>>>>> HOWEVER, much of AI does NOT fit this model - yet still appears to
>>>>>>> "work". If this is useful then use it, but there usually is no path to
>>>>>>> better solutions. You can't directly understand, optimize, adapt, debug,
>>>>>>> etc., because it is difficult/impossible to wrap your brain around
>>>>>>> quantities representing nothing.
>>>>>>>
>>>>>>> Manipulations that don't fit this model are numerology, not
>>>>>>> mathematics, akin to astrology instead of astronomy.
>>>>>>>
>>>>>>> It seems perfectly obvious to me that AGI, when it comes into being,
>>>>>>> will involve NO numerological faux "computation".
>>>>>>>
>>>>>>> Sure, learning could involve developing entirely new computation, but
>>>>>>> it would have to perform potentially valid computations on its inputs.
>>>>>>> For example, adding probabilities is NOT valid, but ORing them could be
>>>>>>> valid.
>>>>>>>
>>>>>>> Steve
>>>>>>>
>>>>>>> On Thu, Jun 20, 2019, 8:22 AM Alan Grimes via AGI <
>>>>>>> agi@agi.topicbox.com> wrote:
>>>>>>>
>>>>>>>> It has the basic structure and organization of a conscious agent;
>>>>>>>> obviously it lacks the other ingredients required to produce a
>>>>>>>> complete mind.
>>>>>>>>
>>>>>>>> Stefan Reich via AGI wrote:
>>>>>>>> > Prednet develops consciousness?
>>>>>>>> >
>>>>>>>> > On Wed, Jun 19, 2019, 06:51 Alan Grimes via AGI <
>>>>>>>> > agi@agi.topicbox.com> wrote:
>>>>>>>> >
>>>>>>>> >     Yay, it seems peeps are finally ready to talk about this!! =P
>>>>>>>> >
>>>>>>>> >
>>>>>>>> >     Let's see if I can fool anyone into thinking I'm actually
>>>>>>>> >     making sense by starting with a first principles approach...
>>>>>>>> >
>>>>>>>> >
>>>>>>>> >
