On Wed, Apr 10, 2013 at 6:29 PM, Tim Tyler <[email protected]> wrote:
>
> Practically any form of software development involves lots of testing.
>
> However, looking at:
>
> http://multiverseaccordingtoben.blogspot.com/2011/06/why-is-evaluating-partial-progress.html
>
> ...certainly suggests that Ben has some rather odd ideas about testing.

It's called "wishful thinking". He proposes "cognitive synergy" as an
excuse for not testing. When all of the components are put together,
it will magically work. It's just intuition, of course, not backed by
any evidence. In fact, all of the evidence points the other way. The
most powerful models in machine learning are ensemble models. You
combine lots of predictors and get more accurate predictions. If you
remove half of them, then you still get most of the accuracy. Each
model can be tested independently of the others, because each one
operates independently in practice. Some examples:

- Watson is made up of hundreds of independent modules. Each is able
to answer a small subset of Jeopardy questions.
- The PAQ compressor is made up of hundreds of independent bit predictors.
- People partially recover from strokes because other parts of the
brain compensate for the parts that are destroyed.
- Stephen Hawking and Helen Keller are missing some key components but
are still considered intelligent.
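The graceful degradation these examples point to is easy to demonstrate with a toy simulation (my sketch, not from anything above; it assumes independent predictors that are each right 60% of the time, which is an idealization):

```python
import random

random.seed(0)

def vote_accuracy(n_models, p_correct=0.6, trials=10000):
    """Estimate majority-vote accuracy of n_models independent
    predictors, each correct with probability p_correct."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct
                            for _ in range(n_models))
        if correct_votes * 2 > n_models:  # strict majority correct
            wins += 1
    return wins / trials

print(vote_accuracy(1))    # single predictor: roughly 0.60
print(vote_accuracy(101))  # full ensemble: well above 0.95
print(vote_accuracy(51))   # half the models removed: still around 0.9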

Early progress in AI was rapid, but then stalled after we solved all
the easy parts. That pattern is hard to square with cognitive synergy:
if synergy were real, progress would have been slow at first and then
picked up speed as the pieces came together, the opposite of what we
observed.

It looks to me like OpenCog is going the way of NARS. You may recall
how Pei Wang spent over a decade developing a general data
structure for knowledge representation and a mathematical model of
learning and reasoning. It has many of the same elements as AtomSpace:
truth values, confidences, is-a links, logical operations, etc. But it
ended up going nowhere. He never did any of the hard work of
collecting training data and testing it on real-world problems like
text prediction or image labeling or robot navigation. He never
estimated how much data he would need, or how much computing power to
process it.

As you read this, your brain is computing 10^15 weighted sums per
second on 10^14 weights, and then adjusting them according to a
complex algorithm that depends on 3 x 10^9 DNA bases, equivalent to
300 million lines of code written by a 3-billion-year-long search
algorithm running on a planet-sized molecular computer. Maybe there is
a way to do this on your PC, but I really doubt it.
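Those orders of magnitude can be sanity-checked with trivial arithmetic. The brain figures are the ones quoted above; the PC figures (10^11 FLOPS peak, 8 GB RAM) are my assumptions for a typical 2013 desktop, not numbers from the post:

```python
# Brain figures from the text; PC figures are assumed.
brain_ops_per_sec = 1e15   # weighted sums per second
brain_weights     = 1e14   # synaptic weights
pc_flops          = 1e11   # assumed desktop peak throughput
pc_ram_bytes      = 8e9    # assumed 8 GB of RAM

compute_gap = brain_ops_per_sec / pc_flops   # factor of 10,000
memory_gap  = brain_weights / pc_ram_bytes   # factor of 12,500 at 1 byte/weight

print(f"compute shortfall: {compute_gap:.0f}x")
print(f"memory shortfall (1 byte/weight): {memory_gap:.0f}x")
```

Even with generous assumptions, a single PC falls about four orders of magnitude short on both compute and memory.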

I am not saying this because I want to see OpenCog fail. I would
rather see research and get answers to hard questions. There is a lot
we still don't know.

--
-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424