Matt,

> Watson's knowledge base is 4 TB of text. That is 4000 times more than
> an average person would hear and read in a lifetime. That, and a
> fast reaction time on the buzzer, compensate for its other weaknesses
> such as lack of vision and embodiment.
>
> Reasoning about space and time is fairly simple, but these are only 2
> of over 100 modules, each of which can answer only a few percent of
> the questions. The intelligence comes from putting all of these
> together.
>
> What kind of test would be appropriate for comparing Watson with OpenCog?

At this moment, Watson is certainly a more impressively demonstrable
system than OpenCog.  It is also the result of massively more
man-years (and massively more dollars) of effort, of course...

If OpenCog is successful it will lead (at some time well before it
hypothetically leads to a human-level AGI) to an English dialogue
system that is able to flexibly, common-sensically converse about what
a virtual or robotic agent is experiencing...

As I am not the one obsessed with quantitative metrics, boiling that
down into a formal test is not really my problem...

I would imagine that if one formulated a highly precise test for
"flexible, common-sensical conversation about the experiences of a
virtual or robotic agent", then some Watson-like approach might well
work for passing that test, even though this approach would not
be effectively generalizable to human-level AGI...  But if we got to
this level with an OpenCog system, I believe we would be well on the
path to human-level AGI.

-- Ben G


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424