On Sun, Dec 9, 2012 at 10:56 PM, Jim Bromer <[email protected]> wrote:
>
> You can find the papers here.
> http://www.mindmakers.org/projects/agiconf-2012/wiki/Schedule

I read the extended abstracts here:
http://www.winterintelligence.org/agi-impacts-extended-abstracts/

Summary: out of 12 papers, 0 report advances toward AGI.

The one paper that reports any experimental results whatsoever is the
one by Stuart Armstrong: Predicting AI… or failing to.
The main result (from reading the whole paper) is that the
distribution of predictions of time to AI has not changed since the
1950s, even though those predictions are known to be wrong.

Nine of the papers are on the general theme of safety, ethics, or
friendliness of self-improving AI. There seems to be a general belief
that if we can produce a smarter-than-human AI, then so could it, only
faster. So we had better get its goal system right. But there is one
problem: it would be all of humanity, not a single human, that
produced this super-human AI. So the threshold for self improvement
hasn't been crossed yet. If you want an example of recursive self
improvement, look at civilization making a better version of
civilization, as measured by economic growth and increased life
expectancy. That is happening in spite of the lack of any obvious goal
system that needs to be programmed.

To see why creating a single super-human AI is not recursive self
improvement, ask yourself if you could have done it 100 years ago,
knowing what you know now. Could you have done it if you were the only
living person on the planet? If not, then you had help.


-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424