Steve, I finally found what you meant by "dimensionality", and I see that
your thought on that might be stretched to cover my thinking about
categorical abstractions or "types".

You said, "Genuine computation involves manipulating numerically
expressible value (e.g. 0.62), dimensionality (e.g. probability), and
significance (e.g. +/- 0.1). Outputs of biological neurons appear to fit
this model."

The only difference I have is that it may be a real stretch to refer to
'abstractions' as 'dimensionalities' of a value, since a type or an
abstraction has to refer to a lot more than a value. Now that I think about
this, I realize that I have to somehow include this in my thinking about a
conceptual mathematics - even if the mathematics is only used as an
estimating system. And this thought might be useful in an exercise to
attempt to create a weak mathematical system that includes that realization.
So to put this back into perspective, a system of probabilities (for
example) might be used in a mathematical system that generates searches for
possibilities. Since computers can handle really big numbers so much more
easily than we can, maybe this kind of exploration makes sense even if it
is not how neurons work.
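
As a toy illustration of your triple, here is a rough sketch of a value
carrying a dimensionality and a significance, with an OR combinator for
probabilities (the Quantity class and the first-order error propagation are
my own assumptions, not anything you specified; independence of the two
probabilities is also assumed):

```python
from dataclasses import dataclass

@dataclass
class Quantity:
    """A numerically expressible value, its dimensionality, and its significance."""
    value: float          # e.g. 0.62
    dimension: str        # e.g. "probability"
    significance: float   # e.g. +/- 0.1

def p_or(a: Quantity, b: Quantity) -> Quantity:
    """OR two independent probabilities: P(A or B) = 1 - (1 - a)(1 - b).

    Significance is propagated to first order:
    d(OR)/da = 1 - b, d(OR)/db = 1 - a.
    """
    assert a.dimension == b.dimension == "probability"
    v = 1.0 - (1.0 - a.value) * (1.0 - b.value)
    sig = (1.0 - b.value) * a.significance + (1.0 - a.value) * b.significance
    return Quantity(v, "probability", sig)

a = Quantity(0.62, "probability", 0.1)
b = Quantity(0.30, "probability", 0.05)
print(p_or(a, b))
```

Note that simply adding a.value + b.value here could exceed 1.0, which is
your point about addition not being a valid operation on probabilities,
while the OR above always stays in [0, 1].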
Jim Bromer


On Thu, Jun 20, 2019 at 12:02 PM Steve Richfield <steve.richfi...@gmail.com>
wrote:

> Too much responding without sufficient thought. After a week of thought
> regarding earlier postings on this thread...
>
> Genuine computation involves manipulating numerically expressible value
> (e.g. 0.62), dimensionality (e.g. probability), and significance (e.g. +/-
> 0.1). Outputs of biological neurons appear to fit this model.
>
> HOWEVER, much of AI does NOT fit this model - yet still appears to "work".
> If this is useful then use it, but there usually is no path to better
> solutions. You can't directly understand, optimize, adapt, debug, etc.,
> because it is difficult/impossible to wrap your brain around quantities
> representing nothing.
>
> Manipulations that don't fit this model are numerology, not mathematics,
> akin to being astrology instead of astronomy.
>
> It seems perfectly obvious to me that AGI, when it comes into being, will
> involve NO numerological faux "computation".
>
> Sure, learning could involve developing entirely new computation, but it
> would have to perform potentially valid computations on its inputs. For
> example, adding probabilities is NOT valid, but ORing them could be valid.
>
> Steve
>
> On Thu, Jun 20, 2019, 8:22 AM Alan Grimes via AGI <agi@agi.topicbox.com>
> wrote:
>
>> It has the basic structure and organization of a conscious agent,
>> obviously it lacks the other ingredients required to produce a complete
>> mind.
>>
>> Stefan Reich via AGI wrote:
>> > Prednet develops consciousness?
>> >
>> > On Wed, Jun 19, 2019, 06:51 Alan Grimes via AGI <agi@agi.topicbox.com
>> > <mailto:agi@agi.topicbox.com>> wrote:
>> >
>> >     Yay, it seems peeps are finally ready to talk about this!! =P
>> >
>> >
>> >     Let's see if I can fool anyone into thinking I'm actually making
>> >     sense by
>> >     starting with a first principles approach... Permalink
>> >     <
>> https://agi.topicbox.com/groups/agi/T395236743964cb4b-M686d9fcf7662ad8dc2fc1130
>> >
>> >
>> >
>> 
>> 
>> --
>> Please report bounces from this address to a...@numentics.com
>> 
>> Powers are not rights.
>> 
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T395236743964cb4b-M48fdc303790ee839988c5852
Delivery options: https://agi.topicbox.com/groups/agi/subscription