http://www.fastcompany.com/3005313/evolved-brains-robots-creep-closer-animal-learning
> “Imagine how hard it would be to build and modify a non-modular car,”
> states Clune. “If the machinery of the locks were entangled with the
> functionality of everything else, when you improve how the locks work, you
> might break the transmission, muffler, and steering wheel. That sort of
> non-modular design is too entangled to work with.”

This puts me in mind of Steven Pinker's "mental modules" or "mental organs," as he describes them in *How the Mind Works* (http://www.amazon.com/How-Mind-Works-Steven-Pinker/dp/1469228424). If the mind really is made of modules and not just a blank, undifferentiated substrate -- an idea supported by the consistent localization of particular processes to the same areas of the brain across different individuals of the same species -- what modules are appropriate for an AGI design, and how do they connect to each other? Are there modules that are only necessary for human minds, not AGIs? Are there modules we could add to the list as enhancements? How can those modules be made to work together to create globally intelligent behavior?

When I consider the modules vs. generic substrate question, several other questions come to mind. Why would nature use a one-size-fits-all design when, based on past experience with computational algorithms, there are clear advantages to tailoring an algorithm to the task at hand? And regardless of whether nature uses a generic design or special modules for different classes of behavior, why would we choose an undifferentiated design for an AGI, given the known advantages of the other approach? Why would we design a car whose transmission breaks when we improve the locks? Likewise, why would we design an AGI that can't get an update to its conversational protocols (or to the learning algorithm used to pick them up automatically) without breaking its vision capabilities?

Learning and reasoning are of core importance to intelligence, but I have a feeling this is in the same sense that muscles are of core importance to movement. The body has hundreds of different voluntary muscles, each of nearly identical structure, and these muscles attach to the skeletal framework to do different jobs using the same underlying mechanisms. Machine learning and AI have given us hundreds of variations on algorithmic "muscles" for learning and reasoning. Maybe the next step isn't to come up with a better, more general-purpose variation on one of these algorithms. Maybe the next step is building the framework to which the existing ones connect so as to produce maximum leverage in the direction needed -- a speech understanding engine composed of a collection of algorithms individually tailored for recognizing speech and generating streaming inbound speech events, grouping speech events into higher-level structures, and mapping those higher-level structures onto the space of possible meanings; a vision interpretation engine similarly composed of smaller, targeted algorithms; a body control engine; a high-level reasoning engine; and so on.
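To make that last idea a bit more concrete, here is a minimal sketch in Python of the kind of skeleton I have in mind. Everything in it is hypothetical -- the Engine/Framework names, the process() interface, and the stubbed-out stages are mine, not something taken from the article or from any existing library -- but it shows how individually tailored engines could attach to a shared framework through a narrow interface, so that upgrading the "locks" can never get entangled with the "transmission":

    from abc import ABC, abstractmethod
    from typing import Any, Dict, List

    class Engine(ABC):
        """Narrow interface every specialized engine exposes to the framework.
        Engines talk to the rest of the system only through process(), so the
        algorithms inside one engine can be upgraded without touching another."""
        @abstractmethod
        def process(self, percept: Dict[str, Any]) -> Dict[str, Any]:
            ...

    class SpeechEngine(Engine):
        """Composed of smaller, individually tailored stages (all stubs here)."""
        def process(self, percept: Dict[str, Any]) -> Dict[str, Any]:
            events = self._detect_events(percept.get("audio", b""))
            phrases = self._group_events(events)
            return {"meanings": self._map_to_meanings(phrases)}

        def _detect_events(self, audio: bytes) -> List[str]:
            return ["hello"] if audio else []      # stand-in for a real recognizer

        def _group_events(self, events: List[str]) -> List[List[str]]:
            return [events] if events else []      # stand-in for phrase grouping

        def _map_to_meanings(self, phrases: List[List[str]]) -> List[str]:
            return ["greeting" for _ in phrases]   # stand-in for semantic mapping

    class VisionEngine(Engine):
        def process(self, percept: Dict[str, Any]) -> Dict[str, Any]:
            return {"objects": ["cup"] if percept.get("image") else []}

    class Framework:
        """The 'skeleton' the engines attach to: routes percepts, merges results."""
        def __init__(self) -> None:
            self._engines: Dict[str, Engine] = {}

        def attach(self, name: str, engine: Engine) -> None:
            self._engines[name] = engine           # swap in an upgraded engine any time

        def step(self, percept: Dict[str, Any]) -> Dict[str, Any]:
            return {name: eng.process(percept) for name, eng in self._engines.items()}

    if __name__ == "__main__":
        agent = Framework()
        agent.attach("speech", SpeechEngine())
        agent.attach("vision", VisionEngine())
        print(agent.step({"audio": b"\x01", "image": None}))
        # Improving the "locks" (speech) never reaches into the "transmission" (vision):
        agent.attach("speech", SpeechEngine())

The stubs would of course be replaced by real recognizers, groupers, and mappers; the point is only that the framework sees nothing but process(), which is what keeps an update to one engine from breaking the others.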