> > > On 5/6/08, Stephen Reed <[EMAIL PROTECTED]> wrote:
> > > > I believe the opposite of what you say.  I hope that my following 
> > > > explanation will help converge our thinking.  Let me first emphasize 
> > > > that I plan a vast multitude of specialized agencies, in which each 
> > > > agency has a particular mission.  This pattern is adopted from human 
> > > > agencies.  For example, a human advertising agency has as its mission 
> > > > the preparation of advertising media for its customers.  Agents, who 
> > > > are governed by the agency, fulfill its mission by carrying out 
> > > > commanded tasks, responding to perceived events, reporting to superiors, 
> > > > and controlling subordinates.
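
The agency pattern described above can be sketched as follows. This is only an illustration of the structure (mission-bearing agency, agents that perform tasks, respond to events, report upward, and control subordinates); all class and method names here are hypothetical, not from any actual Texai code.

```python
# Illustrative sketch of the agency/agent pattern: an Agency with a mission
# governs Agents that carry out commanded tasks, respond to perceived events,
# report to superiors, and control subordinates.

class Agent:
    def __init__(self, name, superior=None):
        self.name = name
        self.superior = superior      # the agent (or agency) this agent reports to
        self.subordinates = []        # the agents this agent controls
        self.log = []                 # reports received from subordinates

    def perform(self, task):
        # Carry out a commanded task and report the result to the superior.
        result = f"{self.name} completed {task}"
        self.report(result)
        return result

    def on_event(self, event):
        # Respond to a perceived event by tasking subordinates.
        for sub in self.subordinates:
            sub.perform(f"handle {event}")

    def report(self, message):
        if self.superior is not None:
            self.superior.log.append(message)

class Agency(Agent):
    def __init__(self, name, mission):
        super().__init__(name)
        self.mission = mission        # the agency's particular mission

# Example: an advertising agency tasking a subordinate on a perceived event.
agency = Agency("ad-agency", mission="prepare advertising media")
copywriter = Agent("copywriter", superior=agency)
agency.subordinates.append(copywriter)
agency.on_event("new customer brief")
print(agency.log)  # ['copywriter completed handle new customer brief']
```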


If the agents have common sense, they could use it to broker capabilities
among themselves, but that begs the question, because we don't have
commonsense AGI yet.

A more interesting possibility is to spawn a large number of very *weak*
intelligent agents over the net, none of which has common sense
individually, and let common sense emerge from their interaction.

It seems possible, but we'll need to design distributed algorithms
for reasoning, especially distributed deduction...
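
A toy simulation of what such distributed deduction might look like: each "weak" agent holds only a few Horn rules and facts, and newly derived facts are broadcast to the other agents until no agent can derive anything new. This is a hypothetical sketch of the idea, not a concrete protocol proposal; the naive broadcast step would of course not scale to a real network.

```python
# Toy distributed deduction: weak agents, each with a few Horn rules,
# exchange derived facts until the network reaches a global fixpoint.

class WeakAgent:
    def __init__(self, rules, facts):
        self.rules = rules            # list of (premises, conclusion) pairs
        self.facts = set(facts)       # this agent's local knowledge

    def step(self):
        # Forward-chain one round over local rules; return newly derived facts.
        new = set()
        for premises, conclusion in self.rules:
            if all(p in self.facts for p in premises) and conclusion not in self.facts:
                new.add(conclusion)
        self.facts |= new
        return new

def run_network(agents):
    # Broadcast each agent's newly derived facts to all agents until fixpoint.
    changed = True
    while changed:
        changed = False
        for agent in agents:
            derived = agent.step()
            if derived:
                changed = True
                for other in agents:
                    other.facts |= derived   # crude "broadcast" of new facts

# Neither agent can conclude "mortal(socrates)" alone; the network can.
a1 = WeakAgent(rules=[(("human(socrates)",), "animal(socrates)")],
               facts=["human(socrates)"])
a2 = WeakAgent(rules=[(("animal(socrates)",), "mortal(socrates)")], facts=[])
run_network([a1, a2])
print("mortal(socrates)" in a2.facts)  # True
```

The point of the example is that the conclusion emerges only from the exchange: each agent's rule set is too weak to reach it in isolation.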

YKY

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now