Benjamin,

>> That proves my point [that an AGI project can be successfully split
>> into smaller narrow-AI subprojects], right?

> Yes, but it's a largely irrelevant point.  Because building a narrow-AI
> system in an AGI-compatible way is HARDER than building that same
> narrow-AI component in a non-AGI-compatible way.

Even if this were the case (which it is not), it would simply mean
several development steps:
1) Develop a narrow-AI system with a non-reusable AI component and get
rewarded for that (because it would be a useful system by itself).
2) Refactor the non-reusable AI component into a reusable one and get
rewarded for that (because it would be a reusable component for sale);
a minimal sketch follows this list.
3) Apply the reusable AI component in an AGI and get rewarded for that.
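To make step 2 concrete, here is a toy Python sketch (the names and the
trivial "spam" logic are invented purely for illustration, not taken
from any real project): a check hard-wired to one application is
refactored behind a generic classifier interface, so the same narrow
component can later be consumed by a larger system, step-3 style.

from abc import ABC, abstractmethod

# Step 1 style: a narrow component welded to one application.
def filter_spam(email_text: str) -> bool:
    """Spam check hard-coded for a single mail client."""
    return "buy now" in email_text.lower()

# Step 2 style: the same logic refactored behind a reusable interface.
class Classifier(ABC):
    """Generic interface any larger system could call."""
    @abstractmethod
    def classify(self, item: str) -> str: ...

class SpamClassifier(Classifier):
    """The narrow component, now usable wherever a Classifier fits."""
    def classify(self, item: str) -> str:
        return "spam" if "buy now" in item.lower() else "ham"

# Step 3 style: a bigger system consumes the component through the interface.
def route(classifier: Classifier, item: str) -> str:
    return classifier.classify(item) + ": " + item

print(route(SpamClassifier(), "Buy now and save big!"))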

If you analyzed the effectiveness of reward systems, you would notice
that systems (humans, animals, or machines) that are rewarded
immediately for a positive contribution perform considerably better
than systems whose reward arrives long after the successful
accomplishment.
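For the "machines" case this is the classic credit-assignment problem.
A toy sketch (all numbers and names invented for illustration): the
same two-armed bandit, learned once with immediate feedback and once
with feedback delivered only at the end of each episode.

import random

ARM_PAYOFF = [0.2, 0.8]   # arm 1 is genuinely better
EPISODE_LEN = 10
EPISODES = 200
EPSILON = 0.1             # small exploration rate

def pull(arm):
    return 1.0 if random.random() < ARM_PAYOFF[arm] else 0.0

def run(delayed):
    value, count = [0.0, 0.0], [0, 0]
    for _ in range(EPISODES):
        actions, rewards = [], []
        for _ in range(EPISODE_LEN):
            if random.random() < EPSILON:
                arm = random.randrange(2)
            else:
                arm = max((0, 1), key=lambda a: value[a])
            actions.append(arm)
            rewards.append(pull(arm))
            if not delayed:                   # immediate credit assignment
                count[arm] += 1
                value[arm] += (rewards[-1] - value[arm]) / count[arm]
        if delayed:                           # only the episode total is seen,
            avg = sum(rewards) / EPISODE_LEN  # so credit is smeared uniformly
            for arm in actions:
                count[arm] += 1
                value[arm] += (avg - value[arm]) / count[arm]
    return [round(v, 2) for v in value]

random.seed(0)
print("immediate reward -> value estimates:", run(delayed=False))
print("delayed reward   -> value estimates:", run(delayed=True))

With immediate feedback the two estimates separate and the better arm
is identified; with end-of-episode feedback they stay close together,
because credit for a good episode is smeared over every action in it.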


> So, given the pressures of commerce and academia, people who are
> motivated to make narrow-AI for its own sake, will almost never create
> narrow-AI components that are useful for AGI.

Sorry, but that does not match how things really work.
So far, only researchers/developers who picked the narrow-AI approach
have accomplished something useful for AGI.
E.g.: Google, computer languages, network protocols, databases.

Pure AGI researchers have contributed nothing but disappointment in AI
ideas.



>> Would you agree that splitting very complex and big project into
>> meaningful parts considerably improves chances of success?

> Yes, sure ... but demanding that these meaningful parts
> -- be economically viable
> and/or
> -- beat competing, somewhat-similar components in competitions
> dramatically DECREASES chances of success ...

INCREASES chances of success. Dramatically.
There are lots of examples supporting this, both in the AI research
field and in virtually every other area of human research.


