It's striking how muddled and confused the AI field is, and the AGI field especially!

The basic outline of an AGI black-box is:

{m, s'} := f(p, s)

Where m is the motor output, s is the state, s' is the successor state, and p is the perceptual stream. We make two simplifying assumptions: evolution has NOT found a way to cheat causality, and the system is NOT quantum, even if the algorithm turns out to be just good enough that one might suspect it of being quantum.
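
Here's a minimal sketch of that interface in Python. Every name and type below is an illustrative placeholder of mine, not anything from a real library:

from typing import Tuple

Percept = bytes   # p: the raw perceptual stream (placeholder type)
Motor = bytes     # m: the raw motor output (placeholder type)
State = dict      # s: whatever internal state f carries between ticks

def f(p: Percept, s: State) -> Tuple[Motor, State]:
    """One tick of the agent: consume a percept and the current state,
    emit a motor command plus the successor state s'."""
    s_prime = dict(s)     # classical and causal: s' depends only on (p, s)
    s_prime["last_percept"] = p
    m = b""               # a real agent computes m from (p, s); stubbed here
    return m, s_prime

def run(percepts, s):
    """The whole agent is just f folded over the percept stream."""
    for p in percepts:
        m, s = f(p, s)
        yield m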

The early period of AI can be defined as the time during which people tried to replicate the output of f() without trying (or having the ability) to look under the hood in any significant way. Many things came of this research: lasting successes like relational databases, Lisp, and Prolog; some decided failures such as expert systems, fuzzy logic, CYC, and others; and amusing little toys such as Eliza.

The commonality among these failures was that none of them could penetrate into subconscious thought. A relational database can answer any question that can be derived from its data, but it understands nothing at all about what that data represents.
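
To make that concrete, here's a toy example using Python's built-in sqlite3 module (the schema and rows are invented for illustration). The engine answers the query flawlessly while having no concept of what any of the symbols mean:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE birds (name TEXT, can_fly INTEGER)")
db.executemany("INSERT INTO birds VALUES (?, ?)",
               [("sparrow", 1), ("penguin", 0)])

# A perfectly correct answer, derived purely from the stored symbols...
print(db.execute("SELECT name FROM birds WHERE can_fly = 0").fetchall())
# [('penguin',)]
# ...but the engine has no notion of 'bird', 'flight', or why penguins can't.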


The current hype cycle of AI is starting to peek inside the black box a little and, remarkably enough, even implementing a rough approximation of the brain's systems produces fairly impressive results. But it's still orders of magnitude worse than the brain when it comes to learning.

It stands to reason that a less approximate brain-inspired architecture will less approximately exhibit intelligence. Even when you make limited assumptions about the problem the brain is solving somewhere and then implement an optimized algorithm that does it much better, you are still operating within the framework of the brain. That framework has a fairly high ceiling on how far one can tweak it, but the ceiling is still there.

Some examples of the limitations of the brain's architecture include the inability to multiplex mental resources (i.e. to run a network of dozens of instances while retaining the advantages of a single shared knowledge and skill pool), the lack of network features, etc...
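
A hypothetical sketch of what such multiplexing might look like; nothing here is a real AGI API, it's just the shape of the idea (many concurrent instances, one shared knowledge pool):

import threading
from concurrent.futures import ThreadPoolExecutor

class KnowledgePool:
    """A single store of learned knowledge, shared by every instance;
    the thing a biological brain cannot split across bodies."""
    def __init__(self):
        self._facts = {}
        self._lock = threading.Lock()

    def learn(self, key, value):
        with self._lock:
            self._facts[key] = value      # one instance learns...

    def recall(self, key):
        with self._lock:
            return self._facts.get(key)   # ...every instance knows it

pool = KnowledgePool()
with ThreadPoolExecutor(max_workers=8) as ex:
    # Eight "instances" work in parallel, all writing to the one pool.
    list(ex.map(lambda i: pool.learn(f"task-{i}", f"result-{i}"), range(8)))

print(pool.recall("task-3"))  # any instance can use what any other learned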

So a deeper understanding of WTF we are doing is required, and it can't be obtained by directly reverse engineering the brain.


Anyway, I'm not sure I had a complete thought to share with you... this post has been sitting in my editor for more than a week...

