On Mon, Dec 10, 2012 at 7:35 PM, Ben Goertzel <[email protected]> wrote:
> Russell,
>
> The link Matt provided was not for the AGI-12 conference on AGI
> research, but rather for another smaller conference operated by
> Oxford's Future of Humanity Institute, AGI-Impacts, which is
> co-located with AGI-12 and deals with the potential future
> implications of AGI...
>
> If you look at the page for the actual AGI conference,
>
> http://agi-conference.org/2012/schedule/
>
> you will find a rather different set of papers ;p
>
> ben


Yes, you're right. I should have noticed that.

Your two papers didn't load for some reason (Chrome says "failed to
load PDF"), but I read the abstracts and skimmed through the 36 other
papers. I counted 5 that had experimental results comparing different
techniques on real data, so I guess we are making slow progress toward
AGI. It is hard to tell, because just about every paper in computer
science that I have ever seen shows an improvement of the author's
technique over somebody else's technique on some data set, whatever
the technique happens to be. For such a result to be useful, it has
to generalize over many types of data, which generally requires many
papers by many different researchers to remove any bias.

I did find one paper in that category, which compared Kolmogorov
complexity with Solomonoff probability. The difference is that
Solomonoff probability combines many models M weighted by 2^-|M|,
while Kolmogorov complexity just takes the shortest M as an
approximation. The authors found that Solomonoff probability is more
accurate at predicting sunspot data. More data sets would help, but it
is still a useful result because it backs up a lot of independent
results supporting ensemble methods of prediction. I am familiar with
such techniques because I use them in data compression.
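The contrast is easy to sketch in code. This toy Python example is my
own illustration, not from the paper: the model set and the "length"
values (standing in for description length in bits) are made up. It
shows a Solomonoff-style mixture weighting every model by 2^-|M|
versus a Kolmogorov-style prediction from the shortest model alone:

```python
import math

def mixture_predict(models, x):
    """Solomonoff-style: weight each model's prediction by 2^-length."""
    weights = [2.0 ** -m["length"] for m in models]
    total = sum(weights)
    return sum(w * m["predict"](x) for w, m in zip(weights, models)) / total

def shortest_predict(models, x):
    """Kolmogorov-style approximation: use only the shortest model."""
    best = min(models, key=lambda m: m["length"])
    return best["predict"](x)

# Hypothetical candidate models of a sine-like series; shorter models
# get exponentially more weight in the mixture.
models = [
    {"length": 10, "predict": lambda x: math.sin(x)},        # shortest
    {"length": 12, "predict": lambda x: 0.9 * math.sin(x)},
    {"length": 14, "predict": lambda x: math.sin(x) + 0.1},
]
```

With real data, the mixture tends to be the more robust predictor,
since the shortest model's errors are averaged against the others.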

The majority of papers offer design proposals or prove mathematical
theorems somehow related to intelligence. They all make strong
arguments for their case, but we really don't know whether the
methods are useful. I realize that there are mathematical results
that have strongly influenced the direction of AI. Obviously Goedel,
Turing, Shannon, Solomonoff, Kolmogorov, and Levin have been a major
influence, as has more recent work by Hutter, Legg, and Schmidhuber.
But ultimately you need to do experiments. Theoretical models of AI
as reinforcement learners can tell us a lot about their limitations,
but the fact is we are not building AI that way. What we are actually
building is a huge, distributed collection of narrow AI systems
working together without any obvious collective goals at all.

--
-- Matt Mahoney, [email protected]

