In response to my message, in which I said,
"What is wrong with the AI-probability group mind-set is that very few
of its proponents ever consider the problem of statistical ambiguity
and its obvious consequences."
Abram noted,
"The 'AI-probability group' definitely considers such problems.
There is a large body of literature on avoiding overfitting, i.e.,
finding patterns that work for more than just the data at hand."
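
(To give Abram's point its due before I go on: here is a minimal sketch,
in Python, of what "finding patterns that work for more than just the
data at hand" amounts to in practice. The holdout split and the
polynomial fit are my own illustrative choices, not anything Abram
specified.)

    import numpy as np

    # Toy data: a noisy linear relationship.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 40)
    y = 2.0 * x + rng.normal(0.0, 0.2, size=x.size)

    # Hold out part of the data so a fitted pattern can be judged on
    # points it never saw.
    idx = rng.permutation(x.size)
    train, test = idx[:30], idx[30:]

    for degree in (1, 9):
        coeffs = np.polyfit(x[train], y[train], degree)
        train_err = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
        test_err = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
        # A high-degree fit can look excellent on the training points and
        # still fail on the holdout; that gap is what the overfitting
        # literature is about.
        print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")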

Suppose I responded with a remark like this:
6341/6344 wrong, Abram...

A remark like this would be absurd: it lacks any reference, explanation,
or validity, and it dresses up an essentially meaningless claim in
comically false numerical precision.

Where does the ratio 6341/6344 come from?  I searched the ListBox
archive for all uses of the word "overfitting" in 2008 and found that,
out of 6344 messages, only 3 actually discussed the word before Abram
mentioned it today.  (I don't know how reliable ListBox is for this
sort of search.)
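
(The arithmetic behind the figure is trivial, of course. If the hand
search were scripted, it would amount to something like the sketch
below; the search itself is hypothetical, since I did the counting by
hand rather than through any programmatic ListBox interface.)

    # Hypothetical sketch of where 6341/6344 comes from. The message list
    # stands in for the ListBox search I actually did by hand.
    def wrongness_ratio(messages, term="overfitting"):
        total = len(messages)
        discussed = sum(1 for m in messages if term in m.lower())
        return (total - discussed) / total

    # With 6344 messages from 2008, 3 of which mention the term:
    # (6344 - 3) / 6344 = 6341/6344, or about 0.9995.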

So what is wrong with my conclusion that Abram was 6341/6344 wrong?
Lots of things, and all of them can be described using declarative
statements.

First of all, the idea that the conversations in this newsgroup
represent an adequate sample of all AI-probability enthusiasts is
totally ridiculous.  Secondly, Abram's mention of overfitting was just
one example of how the general AI-probability community is aware of
the problem I raised.  So while my statistical finding may be
tangentially relevant to the discussion, the presumption that it can
serve as a numerical evaluation of Abram's 'wrongness' is so absurd
that it does not merit serious consideration.  My skepticism, then,
concerns this question: how would a fully automated AGI program that
relied entirely on probability methods avoid getting sucked into the
vortex of such absurd, mushy reasoning if it could not also analyze
the declarative inferences behind its application of statistical
methods?

I believe that an AI program capable of advanced AGI has to be capable
of declarative assessment working alongside whatever other mathematical
methods of reasoning it is programmed with.

Reasoning about declarative knowledge does not necessarily have to be
done in text or anything like that; that is not what I mean.  What I
mean is that an effective AI program is going to have to be capable of
some kind of referential analysis of events in the IO data environment
using methods other than probability.  And if it is to attain higher
intellectual functions, that analysis has to be done in a creative and
imaginative way.

Just as human statisticians have to be able to express and analyze the
application of their statistical methods using declarative statements
that refer to the subject data fields and the methods used, an AI
program designed to use automated probability reasoning to attain
greater general success is going to have to be able to express and
analyze its statistical assessments in some kind of declarative terms
as well.
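
To make that last point a little more concrete, here is a minimal
sketch of what I mean by pairing a statistical result with declarative
statements about it.  The structure and field names are purely
illustrative, not a proposal for any particular system.

    from dataclasses import dataclass

    @dataclass
    class StatisticalAssessment:
        # A numeric result bundled with declarative statements about how it
        # was obtained. The number by itself is meaningless without
        # references to the data, the method, and the claim it is supposed
        # to support.
        value: float            # the numeric result, e.g. 6341/6344
        data_described: str     # what was actually sampled
        method_described: str   # how the figure was computed
        claim_it_supports: str  # the declarative claim being evaluated
        known_limits: list      # reasons the figure may not support the claim

    assessment = StatisticalAssessment(
        value=6341 / 6344,
        data_described="messages on one mailing list in 2008",
        method_described="keyword count of the word 'overfitting'",
        claim_it_supports="how 'wrong' Abram's statement was",
        known_limits=[
            "the list is not a sample of the whole AI-probability community",
            "a keyword count does not measure awareness of a problem",
        ],
    )

    # A program that can inspect and reason over these declarative fields
    # has at least a chance of noticing that the number does not support
    # the claim.
    print(assessment.known_limits)

A probability calculation on its own can only report the value; it is
the declarative statements around it that make it possible to ask
whether the value means anything.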

Jim Bromer

