
I have no funds to build the robotics lab I was planning ($40k budgeted, ~$10k saved), and no VR... (you would imagine that uploaders would jump at the chance to build the 'reality' they want to spend eternity in...)

Because making progress is becoming ever more urgent, I'm forced to attempt to move forward on a purely theoretical basis. This is where most other people on this list metaphorically hang themselves: their theories become too ungrounded and they start writing the kind of crap I immediately delete. (I'm looking at you, Jim.)

Knowledge representation (KR) is its own sub-field because it is both an obvious and a major problem. The trend in the field, one that seems to be supported by psychological evidence, is the use of linear models. Linear models are popular because their mathematics is fairly well developed and because immensely powerful linear solvers (e.g., Nvidia Tesla processors) are readily available.
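To make "linear model" concrete: a linear model is just a matrix applied to a fixed-length vector, so superposition holds -- f(x + y) = f(x) + f(y) -- which is exactly what makes the math tractable and the solvers fast. A toy sketch (the weights and sizes here are made up for illustration):

```python
# Toy linear model: output = W @ x, nothing more.
# Weights and vector sizes are invented for illustration.

def linear(W, x):
    """Apply weight matrix W (list of rows) to vector x."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

W = [[1.0, 2.0],
     [0.5, -1.0]]

a = linear(W, [1.0, 0.0])   # image of the first basis vector
b = linear(W, [0.0, 1.0])   # image of the second basis vector
c = linear(W, [1.0, 1.0])   # image of their sum

# Superposition: f(x + y) equals f(x) + f(y), component by component.
assert all(abs(ci - (ai + bi)) < 1e-9 for ai, bi, ci in zip(a, b, c))
print(c)  # [3.0, -0.5]
```

That superposition property is the whole appeal -- and, as argued below, the whole limitation.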

To understand where I'm going with this, I need to review my underlying theory in a nutshell. Basically, the brain is CONSTANTLY imagining. When you are awake, it uses feedback from the environment to correct what the imaginative process is producing -- this is called perception. When the imagined environment basically agrees with the signals from the sensory organs, then the top-level description that generated the imagined environment must also bear a close resemblance to a top-level description of the environment. That top-level description, in turn, could be sufficient to support any of the reasonable classic AI approaches.
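The loop described above -- imagine, compare against the senses, correct -- can be sketched in a few lines. Everything in this sketch (the scalar "sensed" value, the gain, the names) is invented to illustrate the feedback idea, not a proposal for the real system:

```python
# Perception as corrected imagination: the internal model constantly
# predicts, and sensory feedback nudges the prediction toward reality.
# All quantities here are made up for illustration.

def perceive(sense, belief=0.0, gain=0.5, steps=20):
    """Repeatedly correct an imagined value toward a sensed value."""
    for _ in range(steps):
        prediction = belief            # the 'imagined' signal
        error = sense - prediction     # mismatch with the senses
        belief += gain * error         # correction step
    return belief

print(perceive(sense=10.0))  # converges close to 10.0
```

When the loop settles, the internal state agrees with the senses -- which is the point: the description that *generated* the prediction now describes the world.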

Implicitly, this means that the AI system must be capable of generating photo-realistic images, high-fidelity audio, etc. It also means that the limit of what it can understand is the limit of what it can model (imagine). It also needs the capacity to understand social interactions and subjects that humans can only approach mathematically. The limitations of human cognition are well documented.

So the first thing I need beyond a linear model is a model that can scale dynamically to meet needs. This fixed-length vector bullshit is just not going to cut it. =\ Second, it's going to need to be able to restructure itself as its environment and the demands on it change. So not only will the network be dynamic in size, it will also be dynamic in structure. A space-faring AI will probably need a module for orbital mechanics, etc. None of this shit can be pre-programmed; it must be learned. I would propose that the AI start out with the most minimal pre-established kernel and develop all the higher functional areas dynamically.
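A minimal sketch of the grow-on-demand idea: a model that only adds capacity when its existing units can't cover a new input, so its size is driven by the data rather than fixed up front. The coverage radius, the distance test, and the class name are all assumptions invented for this example:

```python
import math

class GrowingModel:
    """Toy model that recruits a new prototype unit whenever no
    existing unit is close enough to an input -- size follows data."""

    def __init__(self, radius=1.0):
        self.units = []        # learned prototypes; starts empty
        self.radius = radius   # how far each unit's coverage extends

    def observe(self, x):
        # If some existing unit already covers x, no growth needed.
        for u in self.units:
            if math.dist(u, x) <= self.radius:
                return False
        # Otherwise grow: add a new unit centered on x.
        self.units.append(list(x))
        return True

m = GrowingModel(radius=1.0)
for point in [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9)]:
    m.observe(point)
print(len(m.units))  # 2 -- one unit per cluster, learned, not preset
```

Restructuring (adding whole new functional modules, not just units) would be the same principle one level up, but that part is much harder than this toy suggests.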

Finally, I want to get past linear approximations of the non-linearities of the world. The Big Weenie makes a convincing argument in his book, "How to Create a Mind," that the brain uses a system of redundant representations to solve certain problems. For example, you might have many neural representations of some person or object, each handling a different angle of orientation, etc. But this is obviously inferior to a truly 3D model.
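The redundant-representation scheme can be caricatured like this: instead of one model plus a rotation operation, you store a separate template per orientation of the same shape and match against all of them. The shape, the 30-degree angle grid, and the matching rule below are all invented for illustration:

```python
import math

def rotate(point, angle):
    """Rotate a 2D point about the origin."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = point
    return (c * x - s * y, s * x + c * y)

# Redundant scheme: one stored template per orientation of ONE shape.
base = [(1, 0), (0, 1), (-1, 0)]
angles = [math.radians(a) for a in range(0, 360, 30)]
templates = {a: [rotate(p, a) for p in base] for a in angles}

def match(shape):
    """Find the stored orientation whose template best fits the shape."""
    def cost(tmpl):
        return sum(math.dist(p, q) for p, q in zip(tmpl, shape))
    return min(templates, key=lambda a: cost(templates[a]))

seen = [rotate(p, math.radians(60)) for p in base]
print(round(math.degrees(match(seen))))  # 60 -- that template wins
```

The cost of the scheme is obvious from the code: twelve copies of the shape to cover one rotation axis, versus one model and a single `rotate` -- which is the sense in which a true 3D model is superior.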

Yeah, I'm being ambitious here, but because I'm NOT AN UPLOADER I am able to think past the limitations of the brain and contemplate the ultimate AGI (then build it).

Once I finally figure this shit out, the code for this motherfucker is probably going to be on the order of Forth -- really fucking tiny. At the size I envision, it will be possible to prove it mathematically, and at that point I'll basically have a technology that I can throw at any AGI problem and never have to revisit. =P

Rabbi Yudkowsky likes to masturbate about ideas of recursive self-improvement as if it were an unbounded process. My model is that there will be a few iterations of the software until it is mathematically proven, at which point it is basically perfect. The hardware can also evolve through a few iterations until it is the ideal architecture for running the software. And the scale of the system can be built out until most nodes are simply idle most of the time for lack of things to think about. Bottom line is that the rabbi's thought patterns with regard to expanding intelligence so closely mirror my thinking about expanding boobs that the parallels can't be discounted. (Nobody's going to read this far into anything I write, so I can basically say anything at this point in the text.)

oops, some people start reading at the bottom to see if the author ever got to his point and then read backwards to see how the point was developed... better pad out the bottom of the message now...

So I do have a partial outline of the system I need, but I still have a few math problems to solve before I am confident that I have a system that can go the distance here.

--
NOTICE: NEW E-MAIL ADDRESS, SEE ABOVE

Powers are not rights.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
