I originally thought about novel computational rules. Arithmetic is not
reversible because a given result does not uniquely determine the input
operands; many operand pairs map to the same output. That makes it a type
of compression. Furthermore, it uses only a limited set of rules. That
makes it a super compression method.
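A minimal sketch of that many-to-one point (the digit range and target sum are arbitrary choices of mine, purely for illustration):

```python
# Addition is many-to-one: a sum alone cannot be inverted back to its
# operands, because the information distinguishing the inputs has been
# discarded -- the same sense in which lossy compression discards detail.
from itertools import product

target = 7
# All ordered pairs of digits 0-9 that produce the same sum.
preimages = [(a, b) for a, b in product(range(10), repeat=2) if a + b == target]

# Many distinct operand pairs collapse onto the single output 7,
# so no inverse function can recover the original pair.
print(len(preimages))
```

Knowing only the output 7, any of these pairs could have been the input.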

On Thu, Jun 20, 2019, 12:08 PM Jim Bromer <jimbro...@gmail.com> wrote:

> I guess I understand what you mean.
>
> On Thu, Jun 20, 2019, 12:07 PM Jim Bromer <jimbro...@gmail.com> wrote:
>
>> I think your use of metaphors, especially metaphors intended to
>> emphasize your thoughts through exaggeration, may have confused me. Would
>> you explain your last post, Steve?
>>
>> On Thu, Jun 20, 2019, 12:02 PM Steve Richfield <steve.richfi...@gmail.com>
>> wrote:
>>
>>> Too much responding without sufficient thought. After a week of thought
>>> regarding earlier postings on this thread...
>>>
>>> Genuine computation involves manipulating numerically expressible values
>>> (e.g. 0.62), dimensionality (e.g. probability), and significance (e.g. +/-
>>> 0.1). Outputs of biological neurons appear to fit this model.
>>>
>>> HOWEVER, much of AI does NOT fit this model - yet still appears to
>>> "work". If this is useful, then use it, but there is usually no path to
>>> better solutions. You can't directly understand, optimize, adapt, debug,
>>> etc., because it is difficult or impossible to wrap your brain around
>>> quantities representing nothing.
>>>
>>> Manipulations that don't fit this model are numerology, not mathematics -
>>> akin to being astrology instead of astronomy.
>>>
>>> It seems perfectly obvious to me that AGI, when it comes into being,
>>> will involve NO numerological faux "computation".
>>>
>>> Sure, learning could involve developing an entirely new computation, but it
>>> would have to perform potentially valid computations on its inputs. For
>>> example, adding probabilities is NOT valid, but ORing them could be valid.
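A minimal sketch of that OR-vs-add contrast, assuming the two events are independent (my assumption; the post above does not specify one) and using illustrative probabilities:

```python
# For independent events, probabilities combine under OR as
# P(A or B) = P(A) + P(B) - P(A)*P(B), which always stays in [0, 1].
# Naive addition of probabilities does not, and so is not a valid
# probability computation.
def prob_or(p_a, p_b):
    """P(A or B) for independent events A and B."""
    return p_a + p_b - p_a * p_b

p_a, p_b = 0.7, 0.6
print(p_a + p_b)          # exceeds 1 -- not a valid probability
print(prob_or(p_a, p_b))  # remains within [0, 1]
```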
>>>
>>> Steve
>>>
>>> On Thu, Jun 20, 2019, 8:22 AM Alan Grimes via AGI <agi@agi.topicbox.com>
>>> wrote:
>>>
>>>> It has the basic structure and organization of a conscious agent;
>>>> obviously it lacks the other ingredients required to produce a complete
>>>> mind.
>>>>
>>>> Stefan Reich via AGI wrote:
>>>> > Prednet develops consciousness?
>>>> >
>>>> > On Wed, Jun 19, 2019, 06:51 Alan Grimes via AGI <agi@agi.topicbox.com
>>>> > <mailto:agi@agi.topicbox.com>> wrote:
>>>> >
>>>> >     Yay, it seems peeps are finally ready to talk about this!! =P
>>>> >
>>>> >
>>>> >     Let's see if I can fool anyone into thinking I'm actually
>>>> >     making sense by starting with a first-principles approach...
>>>> >     Permalink:
>>>> >     <https://agi.topicbox.com/groups/agi/T395236743964cb4b-M686d9fcf7662ad8dc2fc1130>
>>>> 
>>>> 
>>>> --
>>>> Please report bounces from this address to a...@numentics.com
>>>> 
>>>> Powers are not rights.
>>>> 
>>> Artificial General Intelligence List: AGI
>>> Permalink:
>>> https://agi.topicbox.com/groups/agi/T395236743964cb4b-M3683e7beda9dccf33144d7fc
>>>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/T395236743964cb4b-M576a973586d534a5910e3ee9
Delivery options: https://agi.topicbox.com/groups/agi/subscription
