So nothing, really.

I visited Israel and Palestine last June, before the latest battle in this
century-long war. One side has genetically high IQ, the other has high
fertility. It will be a long time before this conflict ends.

American 19th-century history might give us a clue. The losers were left in
poverty on a tiny fraction of the least desirable land, with the option of
adopting the language and culture of their conquerors as the only way out.
It is the same story in all the old European colonies: Africa, India, Latin
America, and the Caribbean where I happen to be this week.

But that was before women's equality and birth control. Now we have the
technology to give us everything we want. Apparently we want to go extinct.
If you want to see what the world will look like in 50 years, look at the
fertility rate by country. In the US, the fastest growing population is the
Amish.

I was contacted a few days ago by a Reuters journalist researching AI
safety. I described how opinions range from "everything is fine" (LeCun) to
"we are doomed" (Yudkowsky). I gave my opinion that we are focusing on the
wrong risks. A fast takeoff singularity won't happen because intelligence
is not a point on a line. And gray goo is over a century away at the rate
of Moore's law, if it happens at all. It is true that we can't control an
agent with higher intelligence, but a few billionaires still can.

The real risk of AI is social isolation by getting everything you want, or
rather, everything you think you want. Past despots ruled by fear and
torture, but animal trainers know that positive reinforcement is more
effective.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9d6271053bbd3f3-M27f80fdd4a92e011faa67c52
Delivery options: https://agi.topicbox.com/groups/agi/subscription
