On Mon, Dec 29, 2008 at 4:02 PM, Richard Loosemore <r...@lightlink.com> wrote:
>  My friend Mike Oaksford in the UK has written several
> papers giving a higher level cognitive theory that says that people are, in
fact, doing something like bayesian estimation when they make judgments.  In
> fact, people are very good at being bayesians, contra the loud protests of
> the I Am A Bayesian Rationalist crowd, who think they were the first to do
> it.
> Richard Loosemore

That sounds like an easy hypothesis to test, except for one problem:
previous learning would be relevant to solving the test problems, and
its effects could not be fully accounted for.  Complexity, in the
complicated sense of the term, bears on this in two ways: the
complexity of how previous learning might influence decision making,
and the possible (likely) complexity of the judgment process itself.

If extensive tests showed that people overwhelmingly made
Bayesianesque judgments, then this conjecture would be important.  The
problem is that, since the numerous possible influences of previous
learning have to be ruled out, I suspect any test for Bayesian-like
reasoning would have to be kept so simple that it would add nothing
new to our knowledge.
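To make concrete how simple such a test would have to be, here is a
minimal sketch of the kind of normative benchmark it would compare
people against: a textbook two-urn problem where the "correct"
Bayesian posterior is unambiguous.  The urn proportions and priors
below are illustrative assumptions, not from any actual experiment.

```python
def bayes_posterior(prior_a, p_red_given_a, p_red_given_b):
    """Posterior probability of urn A after observing one red draw.

    Hypothetical setup: urn A is 70% red balls, urn B is 30% red,
    and the prior over urns is 50/50.  A subject whose judgment
    tracks Bayes' rule should report something near the posterior.
    """
    evidence = prior_a * p_red_given_a + (1 - prior_a) * p_red_given_b
    return prior_a * p_red_given_a / evidence

posterior = bayes_posterior(0.5, 0.7, 0.3)
print(round(posterior, 2))  # 0.7
```

Anything this stripped-down tells us little, which is the point: once
the task is rich enough to be interesting, previous learning floods
back in as a confound.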

If judgment were that simple, most of the programmers on this list
would have really great AGI programs by now, because simple weighted
decision making is really easy to program.  The problem appears when
you realize that it is just not that easy.
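For the record, "simple weighted decision making" of the kind I mean
can be sketched in a few lines: score each option as a weighted sum of
its feature values and pick the highest scorer.  The weights, options,
and features below are made up for illustration.

```python
def choose(options, weights):
    """Return the name of the option with the largest weighted feature sum."""
    def score(features):
        return sum(weights[k] * v for k, v in features.items())
    return max(options, key=lambda name: score(options[name]))

# Illustrative weights and candidate plans (hypothetical values).
weights = {"speed": 0.6, "accuracy": 0.4}
options = {
    "plan_a": {"speed": 0.9, "accuracy": 0.2},  # score 0.62
    "plan_b": {"speed": 0.3, "accuracy": 0.9},  # score 0.54
}
print(choose(options, weights))  # plan_a
```

That this fits in a dozen lines, while real judgment does not, is
exactly the gap I am pointing at.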

I think Anderson was the first to advocate weighted decision making in
AI; my recollection is that he was writing his theories back in the
1970s.

Jim Bromer


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com
