[Note: As we move through the "omnicrisis", of which the robot apocalypse is only a part, I feel obliged to give my friends fair warning about imminent and severe events that are showing up on my radar (accurate or not). I do not intend to abuse the list or its intended purpose in any way.]

I want to start out with an Indiana Jones analogy that has been running through my head for the last week but didn't seem to have a point that deserved its own post.

Imagine it's 1931 and you are the esteemed Dr. Jones, and you've been working a site for the last six months. Your business model is to document a chamber, make drawings, photographs, and reports, crate them up with whatever artifacts you found, and ship them to your sponsor. The sponsor sells the artifacts to the museums and gives you a cut so that you can continue the expedition. So far, you have excavated a bunch of small chambers typical of ancient structures. Now, all of a sudden, you have found a tunnel into a massive chamber, so large that your little oil lamp can't light the back wall. The echo sounds like there are upper and lower levels too. You aren't sure that it's the actual treasure room, but there is definitely gold near your feet.

That's basically the state of AI at this point. We've discovered **something**, but we can't even measure it yet. Super exciting! It feels like there SHOULD be a ceiling to the GPT paradigm, but we can't really see it yet. The next step will have to be an improvement to the algorithms, but there is so much work to do before we can really get started on that.

[[Jeez, I already have another post in my mind that wants to be written but I'll have to finish this post first....]]

Anyway, in order that it may properly be considered, let's state a "ceiling hypothesis" with respect to AGI. The AGI Ceiling hypothesis is that AGI is fundamentally solvable, as chemistry was, and that after it is solved, AGI will become a purely engineering discipline. After the ceiling is reached, it may still be possible to improve the algorithms, increase scale, tweak the computational substrate, or select a substrate for a specific environment, but the overall framework of an AGI will be known, and no future AGI can be built that cannot be described by that framework.

I am attracted to this notion because it would be super convenient to be able to design a new brain for yourself and never have to revisit your basic architecture again. Obviously the universe is under no obligation to cater to what you find convenient so we need to actually construct a foundation for this hypothesis.

What about the other hypotheses?

OK, what would a sans-ceiling hypothesis look like? It would have to propose that for any AGI framework there exists a more sophisticated one, not merely larger or more elaborate, but a whole complexity class more sophisticated, and that you can never run out of such frameworks; otherwise you have discovered the ceiling.

This latter hypothesis seems altogether less plausible.

One of the laws of science that I consider fundamental is that the null hypothesis must ALWAYS be on the table. The general null hypothesis is that the phenomenon in question does not actually exist and is either an illusion or much more easily explained by other well-understood phenomena.

In this case, the null hypothesis would state that there is no such thing as intelligence that admits a formalized grand unified theory. The phenomenon of a psyche that appears to exist in humans is just a bag-o-hacks that happens to be found in the brain, and it cannot be formalized above the microscopic scale. The existence of any one of these hacks in the brain can be attributed to nothing beyond the fact that it happened to be evolutionarily useful.

This last theory is a bit vexing because it means that brainbuilding will always be a dark art and never a proper engineering discipline; or rather, engineering a brain isn't something you can just compute out on a piece of paper, but a highly empirical process in which you can only hope to pass a given battery of tests, without ever really proving anything about the mind you are creating.


--
Beware of Zombies. =O
#EggCrisis  #BlackWinter
White is the new Kulak.
Powers are not rights.


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf770cc489b0d802c-M668315a990648b3e1bd6ab42
Delivery options: https://agi.topicbox.com/groups/agi/subscription

Reply via email to