Kaj and Tom,

Great idea!

Here's an objection to the few current Friendly AGI efforts and related projects.

"What you're saying about your project seems to make sense, though I don't
quite understand it. But even though an ineffectual bunch of dreamy nerds
may be good for tinkering with gadgets, they're no good at getting a major
funded engineering initiative underway and finished, so there's no use
caring about your project too much."

(No, I don't personally agree with the above statement.)

I do think many people (even fairly intelligent ones) think this way when
exposed to the concept. I wonder if people thought this way about Goddard
as a pioneering space-flight rocketeer in the 1920s.

Joshua



2007/12/27, Kaj Sotala <[EMAIL PROTECTED]>:
>
> For the past week, Tom McCabe and I have been collecting
> all sorts of objections that have been raised against the concepts of
> AGI, the Singularity, Friendliness, and anything else relating to
> SIAI's work. We've managed to get a bunch of them together, so it
> seemed like the next stage would be to publicly ask people for any
> objections we may have missed.
>
> The objections we've gathered so far are listed below. If you know of
> any objection related to these topics that you've seriously
> considered, or have heard people bring up, please mention it if it's
> not in this list, no matter how silly it might seem to you now. (If
> you're not sure of whether the objection falls under the ones already
> covered, send it anyway, just to be sure.) You can send your
> objections to the list or to me directly. Thanks in advance to
> everybody who replies.
>
> AI & The Singularity
> --------------------------
>
> * We are nowhere near building an AI.
> * AI has supposedly been around the corner for 20 years now.
> * Computation isn't a sufficient prerequisite for consciousness.
> * Computers can only do what they're programmed to do.
> * There's no reason for anybody to want to build a superhuman AI.
> * The human brain is not digital but analog: therefore ordinary
> computers cannot simulate it.
> * You can't build a superintelligent machine when we can't even define
> what intelligence means.
> * Intelligence isn't everything: bacteria and insects are more
> numerous than humans.
> * There are limits to everything. You can't get infinite growth.
> * Extrapolation of graphs doesn't prove anything. It doesn't show that
> we'll have AI in the future.
> * Intelligence is not linear.
> * There is no such thing as a human-equivalent AI.
> * Intelligence isn't everything. An AI still wouldn't have the
> resources of humanity.
> * Machines will never be placed in positions of power.
> * A computer can never really understand the world the way humans can.
> * Gödel's Theorem shows that no computer, or mathematical system, can
> match human reasoning.
> * It's impossible to make something more intelligent/complex than
> yourself.
> * AI is just something out of a sci-fi movie, it has never actually
> existed.
> * Creating an AI, even if it's possible in theory, is far too complex
> for human programmers.
> * Human consciousness requires quantum computing, and so no
> conventional computer could match the human brain.
> * A Singularity through uploading/BCI would be more feasible/desirable.
> * True, conscious AI is against the will of God/Yahweh/Jehovah, etc.
> * AI is too long-term a project, we should focus on short-term goals
> like curing cancer.
> * The government would never let private citizens build an AGI, out of
> fear/security concerns.
> * The government/Google/etc. will start their own project and beat us
> to AI anyway.
> * A brain isn't enough for an intelligent mind - you also need a
> body/emotions/society.
>
> Friendliness
> ------------------
>
> * Ethics are subjective, not objective: therefore no truly Friendly AI
> can be built.
> * An AI forced to be friendly couldn't evolve and grow.
> * Shane Legg proved that we can't predict the behavior of
> intelligences smarter than us.
> * A superintelligence could rewrite itself to remove human tampering.
> Therefore we cannot build Friendly AI.
> * A super-intelligent AI would have no reason to care about us.
> * The idea of a hostile AI is anthropomorphic.
> * It's too early to start thinking about Friendly AI.
> * Development towards AI will be gradual. Methods will pop up to deal with
> it.
> * "Friendliness" is too vaguely defined.
> * What if the AI misinterprets its goals?
> * Couldn't AIs be built as pure advisors, so they wouldn't do anything
> themselves?
> * A post-Singularity mankind won't be anything like the humanity we
> know, regardless of whether it's a positive or negative Singularity -
> therefore it's irrelevant whether we get a positive or negative
> Singularity.
> * It's unethical to build AIs as willing slaves.
> * You can't suffer if you're dead, therefore AIs wiping out humanity
> isn't a bad thing.
> * Humanity should be in charge of its own destiny, not machines.
> * Humans wouldn't accept being ruled by machines.
> * You can't simulate a person's development without creating a copy of
> that person.
> * It's impossible to know a person's subjective desires and feelings
> from outside.
> * A machine could never understand human morality/emotions.
> * An AI would just end up being a tool of whichever group built
> it/controls it.
> * AIs would take advantage of their power and create a dictatorship.
> * Creating a UFAI would be disastrous, so any work on AI is too risky.
> * A human upload would naturally be more Friendly than any AI.
> * A perfectly Friendly AI would do everything for us, making life
> boring and not worth living.
> * An AI without self-preservation built in would find no reason to
> continue existing.
> * A superintelligent AI would reason that it's best for humanity to
> destroy itself.
> * The main defining characteristic of complex systems, such as minds,
> is that no mathematical verification of properties such as
> "Friendliness" is possible.
>
>
>
> --
> http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
>
> Organizations worth your time:
> http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>
