An objection related to AGI and the Singularity:

The aspect of the Singularity concept that raises an objection for me is the "over-estimation" of the predicted strength of Artificial Intelligence. A corollary might be the under-estimation of the human brain.
"Intelligence is not linear" may be the same idea, I've not seen the full argument. The idea that I propose is that there is a limitation of intelligence. Essentially this boundary is value related.
Whatever implementation is first used for AGI, it will suffer from the same judgment problems that are common to humans. Values float, and decisions run into values very quickly in the "real world".
The long-winded explanation is found here: http://www.footnotestrongai.com
You tell me if it is the same as "intelligence is not linear".

Stan Nilsen

Kaj Sotala wrote:
For the past week, I have together with Tom McCabe been collecting all sorts of objections that have been raised against the concepts of AGI, the Singularity, Friendliness, and anything else relating to SIAI's work. We've managed to get a bunch of them together, so it seemed like the next stage would be to publicly ask people for any objections we may have missed.

The objections we've gathered so far are listed below. If you know of any objection related to these topics that you've seriously considered, or have heard people bring up, please mention it if it's not in this list, no matter how silly it might seem to you now. (If you're not sure whether the objection falls under the ones already covered, send it anyway, just to be sure.) You can send your objections to the list or to me directly. Thank you in advance to everybody who replies.

AI & The Singularity
--------------------------

* We are nowhere near building an AI.
* AI has supposedly been around the corner for 20 years now.
* Computation isn't a sufficient prerequisite for consciousness.
* Computers can only do what they're programmed to do.
* There's no reason for anybody to want to build a superhuman AI.
* The human brain is not digital but analog; therefore ordinary computers cannot simulate it.
* You can't build a superintelligent machine when we can't even define what intelligence means.
* Intelligence isn't everything: bacteria and insects are more numerous than humans.
* There are limits to everything. You can't get infinite growth.
* Extrapolation of graphs doesn't prove anything. It doesn't show that we'll have AI in the future.
* Intelligence is not linear.
* There is no such thing as a human-equivalent AI.
* Intelligence isn't everything. An AI still wouldn't have the resources of humanity.
* Machines will never be placed in positions of power.
* A computer can never really understand the world the way humans can.
* Gödel's Theorem shows that no computer, or mathematical system, can match human reasoning.
* It's impossible to make something more intelligent/complex than yourself.
* AI is just something out of a sci-fi movie; it has never actually existed.
* Creating an AI, even if it's possible in theory, is far too complex for human programmers.
* Human consciousness requires quantum computing, and so no conventional computer could match the human brain.
* A Singularity through uploading/BCI would be more feasible/desirable.
* True, conscious AI is against the will of God/Yahweh/Jehovah, etc.
* AI is too long-term a project; we should focus on short-term goals like curing cancer.
* The government would never let private citizens build an AGI, out of fear/security concerns.
* The government/Google/etc. will start their own project and beat us to AI anyway.
* A brain isn't enough for an intelligent mind - you also need a body/emotions/society.

Friendliness
------------------

* Ethics are subjective, not objective; therefore no truly Friendly AI can be built.
* An AI forced to be friendly couldn't evolve and grow.
* Shane Legg proved that we can't predict the behavior of intelligences smarter than us.
* A superintelligence could rewrite itself to remove human tampering. Therefore we cannot build Friendly AI.
* A superintelligent AI would have no reason to care about us.
* The idea of a hostile AI is anthropomorphic.
* It's too early to start thinking about Friendly AI.
* Development towards AI will be gradual. Methods will pop up to deal with it.
* "Friendliness" is too vaguely defined.
* What if the AI misinterprets its goals?
* Couldn't AIs be built as pure advisors, so they wouldn't do anything themselves?
* A post-Singularity mankind won't be anything like the humanity we know, regardless of whether it's a positive or negative Singularity - therefore it's irrelevant whether we get a positive or negative Singularity.
* It's unethical to build AIs as willing slaves.
* You can't suffer if you're dead; therefore AIs wiping out humanity isn't a bad thing.
* Humanity should be in charge of its own destiny, not machines.
* Humans wouldn't accept being ruled by machines.
* You can't simulate a person's development without creating a copy of that person.
* It's impossible to know a person's subjective desires and feelings from outside.
* A machine could never understand human morality/emotions.
* An AI would just end up being a tool of whichever group built it/controls it.
* AIs would take advantage of their power and create a dictatorship.
* Creating a UFAI would be disastrous, so any work on AI is too risky.
* A human upload would naturally be more Friendly than any AI.
* A perfectly Friendly AI would do everything for us, making life boring and not worth living.
* An AI without self-preservation built in would find no reason to continue existing.
* A superintelligent AI would reason that it's best for humanity to destroy itself.
* The main defining characteristic of complex systems, such as minds, is that no mathematical verification of properties such as "Friendliness" is possible.
