In some areas of electronics there are times when you need a positive
voltage (or perhaps capacitance) in order to validate that a signal is
actually being transmitted. In that case, where an actual off state cannot
be used (because it could be unreliable), it would be possible to get 1 of
6 states - or even more - transmitted down a binary line by sending it
according to a two-cycle clock. In the model I was talking about, if the
signal is not sent in one of the two clock cycles then it is clear that
there is an interruption in the data. So the idea is interesting, although
I do not see any way to use it effectively in programming.
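
As a rough sanity check (a back-of-the-envelope sketch of my own, taking
the 4-, 6-, and 9-state alphabets discussed in this thread at face value),
here is how much information each scheme carries over a two-cycle frame:

from math import log2

# States available in one two-cycle frame under each scheme:
#   4 = plain binary, one bit per cycle
#   6 = the voltage/timing scheme described above (no stand-alone off state)
#   9 = trinary per cycle when a no-voltage state is allowed (3 * 3)
for name, states in [("binary", 4), ("voltage/timing", 6), ("trinary", 9)]:
    bits = log2(states)
    print(f"{name:15s} {states} states -> {bits:.2f} bits per frame, "
          f"{bits / 2:.2f} bits per cycle")
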
Jim Bromer

On Sun, Jun 23, 2019 at 10:02 PM Jim Bromer <jimbro...@gmail.com> wrote:

> That idea did not work. You can send 1 of 6 states in 2 clock cycles with
> the method I was talking about, but if you have a no-voltage state then
> you can send trinary digits and represent 1 of 9 states in 2 clock cycles.
> Jim Bromer
>
>
> On Sun, Jun 23, 2019 at 1:51 PM Jim Bromer <jimbro...@gmail.com> wrote:
>
>> I think the effective voltage compression in the voltage/timing binary
>> transmission model would approach 1/3 or 1/4. I cannot remember which one
>> offhand.
>> Jim Bromer
>>
>>
>> On Sun, Jun 23, 2019 at 1:38 PM Jim Bromer <jimbro...@gmail.com> wrote:
>>
>>> I should have said: the method by which neurons form generative
>>> 'connections' is irrelevant to the capability of neural activity to
>>> transmit data. The brain must be able to form reactions and to make
>>> choices like inhibiting or activating different kinds of reactions to
>>> some event. This means that the brain must be reacting across some
>>> distances. It could be done by long axons; I have no way of knowing.
>>>
>>> According to Indiana.edu an axon can grow as long as 5 feet.
>>>
>>> After thinking about Matt's response to Steve's remark and Alan's
>>> response in the other thread, I realized I probably misunderstood what
>>> Steve was saying. But I started thinking about what he said (what I now
>>> think he said). A number of years ago I discovered that if you sent two
>>> bits, one by voltage (in my theory) and the other by timing or placement
>>> within a timing frame, you could get some compression by using the two
>>> forms of data representation. (There is a technical dilemma in using
>>> contemporary computer technology, because you are sort of using a
>>> trinary voltage state: no voltage, low voltage, and high voltage. But if
>>> the system were designed around an actual timing mechanism, and no
>>> voltage were just a default state meaning that nothing is being
>>> transmitted in a time frame whenever the receiver reads no voltage, then
>>> the quasi-trinary aspect would just be part of a technical
>>> specification.) I cannot remember if the compression approached 1/2 or
>>> 1/3 or 3/4. So now I have two questions. If there were n dimensions to
>>> the data transmission, could the voltage conservation compression
>>> approach an exponential rate in n? Or would it just be a geometric
>>> compression rate? There would be a cost to such a system, so it would
>>> not be completely exponential or completely geometric regardless of the
>>> resultant compression rate. For instance, in the voltage/timing
>>> mechanism the compression of the voltage signals sent would cost
>>> something in the time taken to send the data out. Oh yeah, I remember.
>>> The 1/3 or 3/4 ratios had something to do with the actual cost in
>>> voltage (of the data transmission), which is relevant in contemporary
>>> technology because of battery usage and heat buildup.
>>>
>>> So if you had 3 physically very distinct binary dimensions to transmit
>>> data within a circuit, using voltage, timing, and routing, could you
>>> reduce the representation to 1/8th? Even if the data had to be
>>> statically represented using all 3 dimensional bits, could the circuit
>>> be nested with similar circuits and used for compressing computations?
>>> It is going to take me some time to figure this out.
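
For what it is worth, here is a minimal counting sketch of the
exponential-versus-geometric question raised just above. The framing is an
assumption on my part: treat the circuit as having k independent binary
dimensions per clock cycle (e.g. voltage, timing, routing). The number of
distinguishable states per cycle then grows exponentially in k (2^k), but
the number of cycles needed to move a fixed number of bits only shrinks by
the constant factor 1/k (1/2, 1/3, 1/4, ...), not by 1/2^k (1/4, 1/8, ...).

from math import ceil

def cycles_needed(total_bits, k):
    """Clock cycles to move total_bits when each cycle carries k independent bits."""
    return ceil(total_bits / k)

B = 1024  # number of bits to transmit (arbitrary example size)
for k in (1, 2, 3, 4):
    print(f"k={k}: {2**k:2d} states per cycle, "
          f"{cycles_needed(B, k):4d} cycles, ratio vs. k=1: 1/{k}")
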
>>>
>>> Jim Bromer
>>>
>>>
>>> On Thu, Jun 20, 2019 at 9:54 PM Jim Bromer <jimbro...@gmail.com> wrote:
>>>
>>>> I guess I should not have said that I totally agree with Steve's
>>>> comment. When he said dimensions I was thinking more of types, such as
>>>> abstract types or something that is effectively similar. Suppose there
>>>> were a non-standard, innovative mathematics that was able to
>>>> effectively deal with data of different abstract types. Then it would
>>>> be capable of calculating with different abstractions, some of which
>>>> might be said to play roles similar to dimensions in the standard
>>>> contemporary mathematics of measurable objects.
>>>>
>>>> The method by which neurons form generative 'connections' is
>>>> irrelevant to the capability of neural activity to transmit data. The
>>>> brain must be able to form reactions and to make choices like
>>>> inhibiting or activating different kinds of reactions to some event.
>>>> This means that the brain must be reacting across some distances. It
>>>> could be done by long synapses; I have no way of knowing.
>>>>
>>>> If natural neural networks are able to implement logical or symbolic
>>>> functions, then they certainly have the potential to transmit richer
>>>> data that is able to encode a great many variations of data objects.
>>>> So, regardless of the details of how firing 'connections' are formed,
>>>> the model of thought that most of us feel is in the neighborhood of the
>>>> ballpark, if not in the dugout, is some sort of variation of the
>>>> computational model of mind. The idea that Hebbian theory might be used
>>>> to impose a severe limitation on the range of neural symbolic
>>>> processing is not supported by our experiences.
>>>> Jim Bromer
>>>>
>>>>
>>>> On Thu, Jun 20, 2019 at 7:33 PM Matt Mahoney <mattmahone...@gmail.com>
>>>> wrote:
>>>>
>>>>> I disagree. By what mechanism would neurons representing feet and
>>>>> meters connect, but not kilograms and liters?
>>>>>
>>>>> Neurons form connections by Hebb's rule. Neurons representing words
>>>>> form connections when they appear close together or in the same context.
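
A toy sketch of the kind of Hebbian co-occurrence association described
above (purely illustrative; the words and numbers are made up): the weight
between two units grows whenever they are active in the same context.

from collections import defaultdict

weights = defaultdict(float)   # (word_a, word_b) -> connection strength
learning_rate = 0.1

contexts = [
    ["five", "feet", "meters"],      # length words co-occur
    ["two", "kilograms", "liters"],  # mass/volume words co-occur
    ["ten", "feet", "meters"],
]

for context in contexts:
    for i, a in enumerate(context):
        for b in context[i + 1:]:
            # Hebb: units that fire together wire together.
            weights[tuple(sorted((a, b)))] += learning_rate

print(weights[("feet", "meters")])     # 0.2 -- associated
print(weights[("feet", "kilograms")])  # 0.0 -- never co-occurred
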
>>>>>
>>>>> On Thu, Jun 20, 2019, 4:14 PM Jim Bromer <jimbro...@gmail.com> wrote:
>>>>>
>>>>>> Steve said: I strongly suspect biological synapses are tagged in some
>>>>>> way to only connect with other synapses carrying dimensionally compatible
>>>>>> information.
>>>>>>
>>>>>> I totally agree. So one thing that I am wondering about is whether
>>>>>> that can be computed using a novel kind of mathematics? Intuitively, I
>>>>>> would say absolutely.
>>>>>>
>>>>>> A truly innovative AI mathematical system would not 'solve' every AI
>>>>>> problem, but could it be developed so that it helped speed up and
>>>>>> direct an initial analysis of input? Intuitively I am pretty sure it
>>>>>> can be done, but I am not at all sure that I could come up with a
>>>>>> method.
>>>>>> Jim Bromer
>>>>>>
>>>>>>
>>>>>> On Thu, Jun 20, 2019 at 1:13 PM Steve Richfield <
>>>>>> steve.richfi...@gmail.com> wrote:
>>>>>>
>>>>>>> Jim,
>>>>>>>
>>>>>>> In many systems, while adding probabilities to compute
>>>>>>> probabilities doesn't make sense, adding counts having poor
>>>>>>> significance, which can look a lot like adding probabilities, can
>>>>>>> make sense to produce a count.
>>>>>>>
>>>>>>> Where this gets confusing is in sensory fusion. Present practice is
>>>>>>> usually some sort of weighted summation, whereas CAREFUL analysis
>>>>>>> would probably involve various nonlinearities to convert inputs to a
>>>>>>> canonical form that makes sense to add, followed by another
>>>>>>> nonlinearity to convert the sum to suitable output units.
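
One concrete instance of the pattern described above (my choice of example,
not necessarily the one Steve has in mind) is combining independent
probability estimates in log-odds space: a nonlinearity into a canonical
form that is legitimate to add, a sum, then a nonlinearity back to the
output units.

import math

def logit(p):
    """Nonlinearity into a canonical, additive form (log-odds)."""
    return math.log(p / (1.0 - p))

def sigmoid(x):
    """Nonlinearity converting the summed log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def fuse(probabilities):
    # Sum in canonical form, then convert the sum to suitable output units.
    return sigmoid(sum(logit(p) for p in probabilities))

print(fuse([0.7, 0.8]))  # ~0.90, versus 0.75 for a naive equal-weight average
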
>>>>>>>
>>>>>>> I strongly suspect biological synapses are tagged in some way to
>>>>>>> only connect with other synapses carrying dimensionally compatible
>>>>>>> information.
>>>>>>>
>>>>>>> Everyone seems to focus on values being computed, when it appears
>>>>>>> that it is the dimensionality that restricts learning to potentially
>>>>>>> rational processes.
>>>>>>>
>>>>>>> Steve
>>>>>>>
>>>>>>> On Thu, Jun 20, 2019, 9:14 AM Jim Bromer <jimbro...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> I originally thought about novel computational rules. Arithmetic is
>>>>>>>> not reversible because a computational result is not unique for the 
>>>>>>>> input
>>>>>>>> operands. That makes it a type of compression. Furthermore it uses a
>>>>>>>> limited set of rules. That makes it a super compression method.
>>>>>>>>
>>>>>>>> On Thu, Jun 20, 2019, 12:08 PM Jim Bromer <jimbro...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I guess I understand what you mean.
>>>>>>>>>
>>>>>>>>> On Thu, Jun 20, 2019, 12:07 PM Jim Bromer <jimbro...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> I think your use of metaphors, especially metaphors that were
>>>>>>>>>> intended to emphasize your thoughts through exaggeration, may
>>>>>>>>>> have confused me. Would you explain your last post, Steve?
>>>>>>>>>>
>>>>>>>>>> On Thu, Jun 20, 2019, 12:02 PM Steve Richfield <
>>>>>>>>>> steve.richfi...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Too much responding without sufficient thought. After a week of
>>>>>>>>>>> thought regarding earlier postings on this thread...
>>>>>>>>>>>
>>>>>>>>>>> Genuine computation involves manipulating a numerically
>>>>>>>>>>> expressible value (e.g. 0.62), a dimensionality (e.g.
>>>>>>>>>>> probability), and a significance (e.g. +/- 0.1). Outputs of
>>>>>>>>>>> biological neurons appear to fit this model.
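
A minimal sketch of a quantity that carries all three of those pieces (the
class and its field names are mine, purely illustrative): a value, a
dimension tag, and a significance, where addition is only allowed between
dimensionally compatible quantities.

from dataclasses import dataclass

@dataclass
class Quantity:
    value: float         # e.g. 0.62
    dimension: str       # e.g. "probability"
    significance: float  # e.g. 0.1, read as a +/- error bound

    def __add__(self, other):
        if self.dimension != other.dimension:
            raise TypeError(f"cannot add {self.dimension} to {other.dimension}")
        # Worst-case error bounds simply accumulate under addition.
        return Quantity(self.value + other.value, self.dimension,
                        self.significance + other.significance)

a = Quantity(1.5, "length_m", 0.01)
b = Quantity(0.9, "length_m", 0.02)
print(a + b)                          # fine: same dimension
# a + Quantity(2.0, "mass_kg", 0.1)   # would raise TypeError
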
>>>>>>>>>>>
>>>>>>>>>>> HOWEVER, much of AI does NOT fit this model - yet still appears
>>>>>>>>>>> to "work". If this is useful then use it, but there usually is
>>>>>>>>>>> no path to better solutions. You can't directly understand,
>>>>>>>>>>> optimize, adapt, debug, etc., because it is difficult/impossible
>>>>>>>>>>> to wrap your brain around quantities representing nothing.
>>>>>>>>>>>
>>>>>>>>>>> Manipulations that don't fit this model are numerology, not
>>>>>>>>>>> mathematics, akin to astrology instead of astronomy.
>>>>>>>>>>>
>>>>>>>>>>> It seems perfectly obvious to me that AGI, when it comes into
>>>>>>>>>>> being, will involve NO numerological faux "computation".
>>>>>>>>>>>
>>>>>>>>>>> Sure, learning could involve developing entirely new
>>>>>>>>>>> computation, but it would have to perform potentially valid
>>>>>>>>>>> computations on its inputs. For example, adding probabilities is
>>>>>>>>>>> NOT valid, but ORing them could be valid.
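
A two-line illustration of that last point (the numbers are arbitrary):
adding two probabilities can leave the valid [0, 1] range, while the OR of
two independent events, p + q - p*q, cannot.

p, q = 0.75, 0.5
print(p + q)          # 1.25  -- not a valid probability
print(p + q - p * q)  # 0.875 -- OR of independent events, stays in [0, 1]
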
>>>>>>>>>>>
>>>>>>>>>>> Steve
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Jun 20, 2019, 8:22 AM Alan Grimes via AGI <
>>>>>>>>>>> agi@agi.topicbox.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> It has the basic structure and organization of a conscious
>>>>>>>>>>>> agent; obviously it lacks the other ingredients required to
>>>>>>>>>>>> produce a complete mind.
>>>>>>>>>>>>
>>>>>>>>>>>> Stefan Reich via AGI wrote:
>>>>>>>>>>>> > Prednet develops consciousness?
>>>>>>>>>>>> >
>>>>>>>>>>>> > On Wed, Jun 19, 2019, 06:51 Alan Grimes via AGI <
>>>>>>>>>>>> agi@agi.topicbox.com
>>>>>>>>>>>> > <mailto:agi@agi.topicbox.com>> wrote:
>>>>>>>>>>>> >
>>>>>>>>>>>> >     Yay, it seems peeps are finally ready to talk about this!! =P
>>>>>>>>>>>> >
>>>>>>>>>>>> >
>>>>>>>>>>>> >     Let's see if I can fool anyone into thinking I'm actually
>>>>>>>>>>>> >     making sense by starting with a first principles approach...
>>>>>>>>>>>> >
>>>>>>>>>>>> >
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Please report bounces from this address to a...@numentics.com
>>>>>>>>>>>>
>>>>>>>>>>>> Powers are not rights.
>>>>>>>>>>>>
>>>>>>>>>>>>
