On Thu, Jan 1, 2009 at 3:05 PM, Jim Bromer <jimbro...@gmail.com> wrote:
> On Mon, Dec 29, 2008 at 4:02 PM, Richard Loosemore <r...@lightlink.com> wrote:
>>  My friend Mike Oaksford in the UK has written several
>> papers giving a higher level cognitive theory that says that people are, in
>> fact, doing something like Bayesian estimation when they make judgments.  In
>> fact, people are very good at being Bayesians, contra the loud protests of
>> the I Am A Bayesian Rationalist crowd, who think they were the first to do
>> it.
>> Richard Loosemore
>
> That sounds like an easy hypothesis to test.  Except for a problem.
> Previous learning would be relevant to solving the problems and would
> produce results that could not be fully accounted for.  Complexity, in
> the complicated sense of the term, is relevant here in two ways: in the
> complexity of how previous learning might influence decision making,
> and in the possible (likely) complexity of the process of judgment
> itself.
>
> If extensive tests showed that people overwhelmingly made judgments
> that were Bayesianesque, then this conjecture would be important.  The
> problem is that, since the numerous possible influences of previous
> learning have to be ruled out, I would suspect that any test for
> Bayesian-like reasoning would have to be kept so simple that it would
> not add anything new to our knowledge.
>
> If judgment were that simple, most of the programmers on this list
> would have really great AGI programs by now, because simple weighted
> decision making is really easy to program.  The problem occurs when
> you realize that it is just not that easy.
>
> I think Anderson was the first to advocate weighted decision making in
> AI, and my recollection is that he was writing his theories back in
> the 1970s.
>
> Jim Bromer

One other thing.  My interest in studies of cognitive science is in how
the results of some study might be related to advanced AI, what is
called AGI in this group.  The use of weighted reasoning seems
attractive, and if these kinds of methods do actually conform to some
cognitive processes, then that would be a tremendous justification for
their use in AGI projects - along with the other methods that would be
necessary to actually simulate or produce conceptually integrated
judgment.
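To be concrete about what I mean by simple weighted decision making,
here is a minimal sketch of a Bayesian-style judgment step.  The
hypotheses, priors, and likelihood numbers are hypothetical, chosen
only to illustrate how mechanically easy the basic update is:

```python
# Minimal sketch of Bayesian-style weighted judgment.
# All hypotheses and numbers below are hypothetical, for illustration only.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities given priors and the likelihood of
    the observed evidence under each hypothesis."""
    # Weight each prior by its likelihood, then renormalize.
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two hypothetical hypotheses with equal prior weight.
priors = {"H1": 0.5, "H2": 0.5}
# Hypothetical likelihoods of one piece of evidence under each hypothesis.
likelihoods = {"H1": 0.8, "H2": 0.2}

posterior = bayes_update(priors, likelihoods)
# With these numbers the posterior shifts to 0.8 for H1 and 0.2 for H2.
```

A few lines like this are trivial to program; the hard part, as I said
above, is everything around them - where the weights come from and how
previous learning shapes them.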

But one of the major design problems with tests that use statistical
methods to demonstrate that some cognitive function of reasoning
conforms with statistical processes is that artifacts of the
statistical method itself may obscure the results.  The design of the
sample therefore has to be called into question, and the proposition
restudied using other design models capable of accounting for possible
sources of artifact error.
Jim Bromer


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/