Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Abram Demski
Jim,

There is a large body of literature on avoiding overfitting, i.e.,
finding patterns that work for more than just the data at hand. Of
course, the ultimate conclusion is that you can never be 100% sure,
but some interesting safeguards have been cooked up anyway, which help
in practice.
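
To make that concrete, here is a toy sketch (hypothetical data, plain
numpy) of the most basic safeguard: check the fitted pattern against
points that were held out of the fit.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy data: a noisy line, 10 training points and 10 held-out points.
    x_train, x_test = rng.uniform(0, 1, 10), rng.uniform(0, 1, 10)
    y_train = 2 * x_train + rng.normal(0, 0.1, 10)
    y_test = 2 * x_test + rng.normal(0, 0.1, 10)

    for degree in (1, 9):
        coeffs = np.polyfit(x_train, y_train, degree)   # fit a polynomial of this degree
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(degree, train_err, test_err)

    # The degree-9 polynomial drives the training error to nearly zero but will
    # usually do much worse on the held-out points: a pattern that worked only
    # for the data at hand.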

My point is, the following paragraph is unfounded:

> This is a problem any AI method has to deal with, it is not just a
> probability thing.  What is wrong with the AI-probability group
> mind-set is that very few of its proponents ever consider the problem
> of statistical ambiguity and its obvious consequences.

The "AI-probability group" definitely considers such problems.

--Abram

On Sat, Nov 29, 2008 at 10:48 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> One of the problems that comes with the casual use of analytical
> methods is that the user becomes inured to their habitual misuse. When
> a casual familiarity is combined with a habitual ignorance of the
> consequences of a misuse the user can become over-confident or
> unwisely dismissive of criticism regardless of how on the mark it
> might be.
>
> The most proper use of statistical and probabilistic methods is to
> base results on a strong association with the data that they were
> derived from.  The problem is that the AI community cannot afford this
> strong a connection to the original source because they are trying to
> emulate the mind in some way and it is not reasonable to assume that
> the mind is capable of storing all data that it has used to derive
> insight.
>
> This is a problem any AI method has to deal with, it is not just a
> probability thing.  What is wrong with the AI-probability group
> mind-set is that very few of its proponents ever consider the problem
> of statistical ambiguity and its obvious consequences.
>
> All AI programmers have to consider the problem.  Most theories about
> the mind posit the use of similar experiences to build up theories
> about the world (or to derive methods to deal effectively with the
> world).  So even though the methods to deal with the data environment
> are detached from the original sources of those methods, they can
> still be reconnected by the examination of similar experiences that
> may subsequently occur.
>
> But still it is important to be able to recognize the significance and
> necessity of doing this from time to time.  It is important to be able
> to reevaluate parts of your theories about things.  We are not just
> making little modifications from our internal theories about things
> when we react to ongoing events, we must be making some sort of
> reevaluation of our insights about the kind of thing that we are
> dealing with as well.
>
> I realize now that most people in these groups probably do not
> understand where I am coming from because their idea of AI programming
> is based on a model of programming that is flat.  You have the program
> at one level and the possible reactions to the data that is input as
> the values of the program variables are carefully constrained by that
> level.  You can imagine a more complex model of programming by
> appreciating the possibility that the program can react to IO data by
> rearranging subprograms to make new kinds of programs.  Although a
> subtle argument can be made that any program that conditionally reacts
> to input data is rearranging the execution of its subprograms, the
> explicit recognition by the programmer that this is a useful tool in
> advanced programming is probably highly correlated with its more
> effective use.  (I mean of course it is highly correlated with its
> effective use!)  I believe that casually constructed learning methods
> (and decision processes) can lead to even more uncontrollable results
> when used with this self-programming aspect of advanced AI programs.
>
> The consequence, then, of failing to recognize this, and of using
> mushed up decision processes that are never compared against the data
> (or kinds of situations) that they were derived from, will be the
> inevitable emergence of inherently illogical decision processes that
> will mush up an AI system long before it gets any traction.
>
> Jim Bromer
>
>




RE: [agi] Mushed Up Decision Processes

2008-11-29 Thread Ed Porter
Jim

My understanding is that a Novamente-like system would have a process of
natural selection that tends to favor the retention and use of patterns
(perceptive, cognitive, behavioral) that prove themselves useful in achieving
goals in the world in which it is embodied.

It seems to me that such a process of natural selection would tend to naturally
put some sort of limit on how out-of-touch many of an AGI's patterns would
be, at least with regard to patterns about things for which the AGI has had
considerable experience from the world in which it is embodied.
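
As a toy sketch of the kind of selection pressure I mean (my own invented
scheme, not Novamente's actual mechanism): patterns carry a utility estimate
that is reinforced when they help achieve a goal, decays when they don't, and
chronically useless patterns get pruned.

    import random

    # Hypothetical pattern store: pattern name -> estimated utility.
    patterns = {"price-always-rises": 0.5, "price-tracks-income": 0.5, "price-is-random": 0.5}

    def reinforce(pattern, succeeded, rate=0.1):
        """Nudge a pattern's utility toward 1 on success, toward 0 on failure."""
        target = 1.0 if succeeded else 0.0
        patterns[pattern] += rate * (target - patterns[pattern])

    # Simulated embodied experience: the world rewards the second pattern most often.
    success_rate = {"price-always-rises": 0.3, "price-tracks-income": 0.8, "price-is-random": 0.5}
    for _ in range(500):
        pattern = random.choice(list(patterns))
        reinforce(pattern, succeeded=random.random() < success_rate[pattern])

    # Prune patterns whose estimated utility stays out of touch with the world.
    patterns = {p: u for p, u in patterns.items() if u > 0.4}
    print(patterns)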

However, we humans often get pretty out of touch with real world
probabilities, as the recent bubble in housing prices, and the commonly
repeated, although historically inaccurate, statement of several years ago ---
that housing prices never go down on a national basis --- show.

It would be helpful to make AGIs a little more accurate in their
evaluation of the evidence for many of their assumptions --- and what that
evidence really says --- than we humans are.

Ed Porter

-Original Message-
From: Jim Bromer [mailto:[EMAIL PROTECTED] 
Sent: Saturday, November 29, 2008 10:49 AM
To: agi@v2.listbox.com
Subject: [agi] Mushed Up Decision Processes

One of the problems that comes with the casual use of analytical
methods is that the user becomes inured to their habitual misuse. When
a casual familiarity is combined with a habitual ignorance of the
consequences of a misuse the user can become over-confident or
unwisely dismissive of criticism regardless of how on the mark it
might be.

The most proper use of statistical and probabilistic methods is to
base results on a strong association with the data that they were
derived from.  The problem is that the AI community cannot afford this
strong a connection to the original source because they are trying to
emulate the mind in some way and it is not reasonable to assume that
the mind is capable of storing all data that it has used to derive
insight.

This is a problem any AI method has to deal with, it is not just a
probability thing.  What is wrong with the AI-probability group
mind-set is that very few of its proponents ever consider the problem
of statistical ambiguity and its obvious consequences.

All AI programmers have to consider the problem.  Most theories about
the mind posit the use of similar experiences to build up theories
about the world (or to derive methods to deal effectively with the
world).  So even though the methods to deal with the data environment
are detached from the original sources of those methods, they can
still be reconnected by the examination of similar experiences that
may subsequently occur.

But still it is important to be able to recognize the significance and
necessity of doing this from time to time.  It is important to be able
to reevaluate parts of your theories about things.  We are not just
making little modifications from our internal theories about things
when we react to ongoing events, we must be making some sort of
reevaluation of our insights about the kind of thing that we are
dealing with as well.

I realize now that most people in these groups probably do not
understand where I am coming from because their idea of AI programming
is based on a model of programming that is flat.  You have the program
at one level and the possible reactions to the data that is input as
the values of the program variables are carefully constrained by that
level.  You can imagine a more complex model of programming by
appreciating the possibility that the program can react to IO data by
rearranging subprograms to make new kinds of programs.  Although a
subtle argument can be made that any program that conditionally reacts
to input data is rearranging the execution of its subprograms, the
explicit recognition by the programmer that this is a useful tool in
advanced programming is probably highly correlated with its more
effective use.  (I mean of course it is highly correlated with its
effective use!)  I believe that casually constructed learning methods
(and decision processes) can lead to even more uncontrollable results
when used with this self-programming aspect of advanced AI programs.

The consequence, then, of failing to recognize this, and of using mushed
up decision processes that are never compared against the data (or kinds
of situations) that they were derived from, will be the inevitable
emergence of inherently illogical decision processes that will mush up
an AI system long before it gets any traction.

Jim Bromer


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com




Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Steve Richfield
Jim,

YES - and I think I have another piece of your puzzle to consider...

A longtime friend of mine, Dave,  went on to become a PhD psychologist, who
subsequently took me on as a sort of "project" - to figure out why most
people who met me then either greatly valued my friendship, or quite the
opposite, would probably kill me if they had the safe opportunity. After
much discussion, interviewing people in both camps, etc., he came up with
what appears to be a key to decision making in general...

It appears that people "pigeonhole" other people, concepts, situations,
etc., into a very finite number of pigeonholes - probably just tens of
pigeonholes for other people. Along with the pigeonhole, they keep
amendments, like "Steve is like Joe, but with ...".

Then, there is the pigeonhole labeled "other" that all the mavericks are
thrown into. Not being at all like anyone else that most people have ever
met, I was invariably filed into the "other" pigeonhole, along with
Einstein, Ted Bundy, Jack the Ripper, Stephen Hawking, etc.

People are "safe" to the extent that they are predictable, and people in the
"other" pigeonhole got that way because they appear to NOT be predictable,
e.g. because of their worldview, etc. Now, does the potential value of the
alternative worldview outweigh the potential danger of perceived
unpredictability? The answer to this question apparently drove how other
people classified me.

Dave's goal was to devise a way to stop making enemies, but unfortunately,
this model of how people got that way suggested no potential solution.
People who keep themselves safe from others having radically different
worldviews are truly in a mental prison of their own making, and there is no
way that someone whom they distrust could ever release them from that
prison.

I suspect that recognition, decision making, and all sorts of "intelligent"
processes may be proceeding in much the same way. There may be no
"grandmother" neuron/pidgeonhole, but rather a "kindly old person" with an
amendment that "is related". If on the other hand your other grandmother
flogged you as a child, the filing might be quite different.
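
If it helps, here is a toy data-structure version of what Dave described:
prototype pigeonholes plus amendments, with anyone who matches no prototype
landing in "other". The prototypes, traits and threshold are all made up for
illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Pigeonhole:
        label: str
        traits: set
        amendments: dict = field(default_factory=dict)   # person -> "but with ..." notes

    HOLES = [
        Pigeonhole("like Joe", {"predictable", "engineer", "calm"}),
        Pigeonhole("like my uncle", {"predictable", "talkative", "salesman"}),
    ]

    def classify(person, traits, threshold=2):
        """File a person into the best-matching pigeonhole, or into 'other'."""
        best = max(HOLES, key=lambda h: len(h.traits & traits))
        overlap = len(best.traits & traits)
        if overlap >= threshold:
            best.amendments[person] = f"but with {traits - best.traits}"
            return best.label
        return "other"        # the unpredictable-seeming mavericks end up here

    print(classify("Steve", {"maverick", "unpredictable", "engineer"}))   # prints "other"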

Any thoughts?

Steve Richfield

On 11/29/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
>
> One of the problems that comes with the casual use of analytical
> methods is that the user becomes inured to their habitual misuse. When
> a casual familiarity is combined with a habitual ignorance of the
> consequences of a misuse the user can become over-confident or
> unwisely dismissive of criticism regardless of how on the mark it
> might be.
>
> The most proper use of statistical and probabilistic methods is to
> base results on a strong association with the data that they were
> derived from.  The problem is that the AI community cannot afford this
> strong a connection to original source because they are trying to
> emulate the mind in some way and it is not reasonable to assume that
> the mind is capable of storing all data that it has used to derive
> insight.
>
> This is a problem any AI method has to deal with, it is not just a
> probability thing.  What is wrong with the AI-probability group
> mind-set is that very few of its proponents ever consider the problem
> of statistical ambiguity and its obvious consequences.
>
> All AI programmers have to consider the problem.  Most theories about
> the mind posit the use of similar experiences to build up theories
> about the world (or to derive methods to deal effectively with the
> world).  So even though the methods to deal with the data environment
> are detached from the original sources of those methods, they can
> still be reconnected by the examination of similar experiences that
> may subsequently occur.
>
> But still it is important to be able to recognize the significance and
> necessity of doing this from time to time.  It is important to be able
> to reevaluate parts of your theories about things.  We are not just
> making little modifications from our internal theories about things
> when we react to ongoing events, we must be making some sort of
> reevaluation of our insights about the kind of thing that we are
> dealing with as well.
>
> I realize now that most people in these groups probably do not
> understand where I am coming from because their idea of AI programming
> is based on a model of programming that is flat.  You have the program
> at one level and the possible reactions to the data that is input as
> the values of the program variables are carefully constrained by that
> level.  You can imagine a more complex model of programming by
> appreciating the possibility that the program can react to IO data by
> rearranging subprograms to make new kinds of programs.  Although a
> subtle argument can be made that any program that conditionally reacts
> to input data is rearranging the execution of its subprograms, the
> explicit recognition by the programmer that this is useful tool in
> advanced 

Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Jim Bromer
Hi.  I will just make a quick response to this message and then I want
to think about the other messages before I reply.

A few weeks ago I decided that I would write a criticism of
ai-probability to post to this group.  I wasn't able to remember all of
my criticisms so I decided to post a few preliminary sketches to
another group.  I wasn't too concerned about how they responded, and
in fact I thought they would just ignore me.  The first response I got
was from an irate guy who was quite unpleasant and then finished by
declaring that I slandered the entire ai-probability community!  He
had some reasonable criticisms about this but I considered the issue
tangential to the central issue I wanted to discuss. I would have
responded to his more reasonable criticisms if they hadn't been
embedded in his enraged rant.  I wondered why anyone would deface the
expression of his own thoughts with an emotional and hostile message,
so I wanted to try the same message on this group to see if anyone who
was more mature would focus on this same issue.

Abram made a measured response but his focus was on the
over-generalization.  As I said, this was just a preliminary sketch of
a message that I intended to post to this group after I had worked on
it.

Your point is taken.  Norvig seems to say that overfitting is a
general problem.  The method given to study the problem is
probabilistic, but it is based on the premise that the original data is
substantially intact.  But Norvig goes on to mention that, with pruning,
noise can be tolerated.  If you read my message again you may see that
my central issue was not really centered on the issue of whether
anyone in the ai-probability community was aware of the nature of the
science of statistics but whether or not probability can be used as
the fundamental basis to create agi given the complexities of the
problem.  So while your example of overfitting certainly does deflate
my statements that no one in the ai-probability community gets this
stuff, it does not actually address the central issue that I was
thinking of.

I am not sure if Norvig's application of a probabilistic method to
detect overfitting is truly directed toward the agi community.  In
other words: Has anyone in this group tested the utility and clarity
of the decision making of a fully automated system to detect
overfitting in a range of complex IO data fields that one might expect
to encounter in AGI?

Jim Bromer



On Sat, Nov 29, 2008 at 11:32 AM, Abram Demski <[EMAIL PROTECTED]> wrote:
> Jim,
>
> There is a large body of literature on avoiding overfitting, ie,
> finding patterns that work for more then just the data at hand. Of
> course, the ultimate conclusion is that you can never be 100% sure;
> but some interesting safeguards have been cooked up anyway, which help
> in practice.
>
> My point is, the following paragraph is unfounded:
>
>> This is a problem any AI method has to deal with, it is not just a
>> probability thing.  What is wrong with the AI-probability group
>> mind-set is that very few of its proponents ever consider the problem
>> of statistical ambiguity and its obvious consequences.
>
> The "AI-probability group" definitely considers such problems.
>
> --Abram
>
> On Sat, Nov 29, 2008 at 10:48 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>> One of the problems that comes with the casual use of analytical
>> methods is that the user becomes inured to their habitual misuse. When
>> a casual familiarity is combined with a habitual ignorance of the
>> consequences of a misuse the user can become over-confident or
>> unwisely dismissive of criticism regardless of how on the mark it
>> might be.
>>
>> The most proper use of statistical and probabilistic methods is to
>> base results on a strong association with the data that they were
>> derived from.  The problem is that the AI community cannot afford this
>> strong a connection to original source because they are trying to
>> emulate the mind in some way and it is not reasonable to assume that
>> the mind is capable of storing all data that it has used to derive
>> insight.
>>
>> This is a problem any AI method has to deal with, it is not just a
>> probability thing.  What is wrong with the AI-probability group
>> mind-set is that very few of its proponents ever consider the problem
>> of statistical ambiguity and its obvious consequences.
>>
>> All AI programmers have to consider the problem.  Most theories about
>> the mind posit the use of similar experiences to build up theories
>> about the world (or to derive methods to deal effectively with the
>> world).  So even though the methods to deal with the data environment
>> are detached from the original sources of those methods, they can
>> still be reconnected by the examination of similar experiences that
>> may subsequently occur.
>>
>> But still it is important to be able to recognize the significance and
>> necessity of doing this from time to time.  It is important to be able
>> to reevaluate p

Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Ben Goertzel
Well, if you're willing to take the step of asking questions about the
world that are framed in terms of probabilities and probability
distributions ... then modern probability and statistics tell you a
lot about overfitting and how to avoid it...

OTOH if, like Pei Wang, you think it's misguided to ask questions
posed in a probabilistic framework, then that theory will not be
directly relevant to you...

To me the big weaknesses of modern probability theory lie in
**hypothesis generation** and **inference**.   Testing a hypothesis
against data, to see if it's overfit to that data, is handled well by
crossvalidation and related methods.
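
For instance (a minimal sketch on a synthetic dataset, assuming scikit-learn
is available, and nothing specific to any AGI system), k-fold cross-validation
scores each model only on data it never saw during fitting:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.RandomState(0)
    X = rng.uniform(-1, 1, size=(200, 3))              # synthetic inputs, 3 variables
    y = X[:, 0] ** 2 + rng.normal(0, 0.3, size=200)    # target depends on one variable, plus noise

    deep = DecisionTreeRegressor(max_depth=None, random_state=0)   # free to overfit
    shallow = DecisionTreeRegressor(max_depth=3, random_state=0)   # constrained hypothesis space

    # Each score is R^2 on a held-out fold; 5 folds in total.
    print(cross_val_score(deep, X, y, cv=5).mean())
    print(cross_val_score(shallow, X, y, cv=5).mean())
    # The shallow tree typically scores better out of sample, even though the
    # deep tree fits its training folds almost perfectly.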

But the problem of: given a number of hypotheses with support from a
dataset, generating other interesting hypotheses that will also have
support from the dataset ... that is where traditional probabilistic
methods (though not IMO the foundational ideas of probability) fall
short, providing only unscalable or oversimplified solutions...

-- Ben G

On Sat, Nov 29, 2008 at 1:08 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> Hi.  I will just make a quick response to this message and then I want
> to think about the other messages before I reply.
>
> A few weeks ago I decided that I would write a criticism of
> ai-probability to post to this group.  I wasn't able remember all of
> my criticisms so I decided to post a few preliminary sketches to
> another group.  I wasn't too concerned about how they responded, and
> in fact I thought they would just ignore me.  The first response I got
> was from an irate guy who was quite unpleasant and then finished by
> declaring that I slandered the entire ai-probability community!  He
> had some reasonable criticisms about this but I considered the issue
> tangential to the central issue I wanted to discuss. I would have
> responded to his more reasonable criticisms if they hadn't been
> embedded in his enraged rant.  I wondered why anyone would deface the
> expression of his own thoughts with an emotional and hostile message,
> so I wanted to try the same message on this group to see if anyone who
> was more mature would focus on this same issue.
>
> Abram made a measured response but his focus was on the
> over-generalization.  As I said, this was just a preliminary sketch of
> a message that I intended to post to this group after I had worked on
> it.
>
> Your point is taken.  Norvig seems to say that overfitting is a
> general problem.  The  method given to study the problem is
> probabilistic but it is based on the premise that the original data is
> substantially intact.  But Norvig goes on to mention that with pruning
> noise can be tolerated. If you read my message again you may see that
> my central issue was not really centered on the issue of whether
> anyone in the ai-probability community was aware of the nature of the
> science of statistics but whether or not probability can be used as
> the fundamental basis to create agi given the complexities of the
> problem.  So while your example of overfitting certainly does deflate
> my statements that no one in the ai-probability community gets this
> stuff, it does not actually address the central issue that I was
> thinking of.
>
> I am not sure if Norvig's application of a probabilistic method to
> detect overfitting is truly directed toward the agi community.  In
> other words: Has anyone in this grouped tested the utility and clarity
> of the decision making of a fully automated system to detect
> overfitting in a range of complex IO data fields that one might expect
> to encounter in AGI?
>
> Jim Bromer
>
>
>
> On Sat, Nov 29, 2008 at 11:32 AM, Abram Demski <[EMAIL PROTECTED]> wrote:
>> Jim,
>>
>> There is a large body of literature on avoiding overfitting, ie,
>> finding patterns that work for more then just the data at hand. Of
>> course, the ultimate conclusion is that you can never be 100% sure;
>> but some interesting safeguards have been cooked up anyway, which help
>> in practice.
>>
>> My point is, the following paragraph is unfounded:
>>
>>> This is a problem any AI method has to deal with, it is not just a
>>> probability thing.  What is wrong with the AI-probability group
>>> mind-set is that very few of its proponents ever consider the problem
>>> of statistical ambiguity and its obvious consequences.
>>
>> The "AI-probability group" definitely considers such problems.
>>
>> --Abram
>>
>> On Sat, Nov 29, 2008 at 10:48 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>>> One of the problems that comes with the casual use of analytical
>>> methods is that the user becomes inured to their habitual misuse. When
>>> a casual familiarity is combined with a habitual ignorance of the
>>> consequences of a misuse the user can become over-confident or
>>> unwisely dismissive of criticism regardless of how on the mark it
>>> might be.
>>>
>>> The most proper use of statistical and probabilistic methods is to
>>> base results on a strong association with the data that they were
>>> de

Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Matt Mahoney
--- On Sat, 11/29/08, Jim Bromer <[EMAIL PROTECTED]> wrote:

> I am not sure if Norvig's application of a probabilistic method to
> detect overfitting is truly directed toward the agi community.  In
> other words: Has anyone in this grouped tested the utility and clarity
> of the decision making of a fully automated system to detect
> overfitting in a range of complex IO data fields that one might expect
> to encounter in AGI?

The general problem of detecting overfitting is not computable. The principle 
according to Occam's Razor, formalized and proven by Hutter's AIXI model, is to 
choose the shortest program (simplest hypothesis) that generates the data. 
Overfitting is the case of choosing a program that is too large.
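
As a crude, computable stand-in for that idea (a two-part code invented for
illustration: bits to describe the model plus bits to describe the residual
errors), one can score hypotheses of increasing size:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 50)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.2, 50)      # the "true" model is degree 1

    def description_length(degree):
        # Crude two-part code: cost of the parameters plus cost of the residuals
        # (in bits, up to an additive constant, so only differences matter).
        coeffs = np.polyfit(x, y, degree)
        residual_var = np.mean((np.polyval(coeffs, x) - y) ** 2)
        model_bits = 0.5 * (degree + 1) * np.log2(len(x))
        data_bits = 0.5 * len(x) * np.log2(residual_var)
        return model_bits + data_bits

    for d in (1, 3, 9, 15):
        print(d, round(description_length(d), 1))
    # Larger degrees keep shrinking the residuals a little, but past the true
    # degree the parameter cost outweighs the gain: the "program" is too large.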

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Charles Hixson

A response to:

"I wondered why anyone would deface the
expression of his own thoughts with an emotional and hostile message,"

My theory is that thoughts are generated internally and forced into words via a 
babble generator.  Then the thoughts are filtered through a screen to remove 
any that don't match one's intent, that don't make sense, etc.  The value 
assigned to each expression is initially dependent on how well it expresses 
one's emotional tenor.

Therefore I would guess that all of the verbalizations that the individual 
generated which passed the first screen were hostile in nature.  From the 
remaining sample he filtered out those which didn't generate sensible-to-him 
scenarios when fed back into his world model.  This left him with a much 
reduced selection of phrases to choose from when composing his response.

In my model this happens a phrase at a time rather than a sentence at a time.  
And there is also a probabilistic element where each word has a certain 
probability of being followed by diverse other words.  I often don't want to 
express the most likely follower, as by choosing a less frequently chosen 
alternative I (believe I) create the impression of a more studied, i.e. 
thoughtful, response.  But if one wishes to convey a more dynamic style then 
one would choose a more likely follower.

Note that in this scenario phrases are generated both randomly and in parallel.
Then they are selected for fitness for expression by passing through various
filters.
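
A toy sketch of that generate-and-filter loop (the phrase table, filter, and
temperature knob are all invented for illustration):

    import random

    # Hypothetical next-word table: each word is followed by weighted candidates.
    TABLE = {
        "the": [("response", 5), ("rant", 2), ("argument", 3)],
        "response": [("was", 6), ("seemed", 2)],
    }

    def babble(start, length, temperature):
        """Generate a candidate phrase; lower temperature favours the likeliest follower."""
        words = [start]
        for _ in range(length):
            options = TABLE.get(words[-1])
            if not options:
                break
            weights = [w ** (1.0 / temperature) for _, w in options]
            words.append(random.choices([o for o, _ in options], weights)[0])
        return " ".join(words)

    def acceptable(phrase):
        # Stand-in for the two screens: matches intent / makes sense in the world model.
        return "rant" not in phrase

    # Generate many candidates (in parallel conceptually; a loop here), then filter.
    candidates = [babble("the", 3, temperature=1.5) for _ in range(20)]
    kept = [p for p in candidates if acceptable(p)]
    print(kept[:5])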

Reasonable?


Jim Bromer wrote:

Hi.  I will just make a quick response to this message and then I want
to think about the other messages before I reply.

A few weeks ago I decided that I would write a criticism of
ai-probability to post to this group.  I wasn't able remember all of
my criticisms so I decided to post a few preliminary sketches to
another group.  I wasn't too concerned about how they responded, and
in fact I thought they would just ignore me.  The first response I got
was from an irate guy who was quite unpleasant and then finished by
declaring that I slandered the entire ai-probability community!  He
had some reasonable criticisms about this but I considered the issue
tangential to the central issue I wanted to discuss. I would have
responded to his more reasonable criticisms if they hadn't been
embedded in his enraged rant.  I wondered why anyone would deface the
expression of his own thoughts with an emotional and hostile message,
so I wanted to try the same message on this group to see if anyone who
was more mature would focus on this same issue.

Abram made a measured response but his focus was on the
over-generalization.  As I said, this was just a preliminary sketch of
a message that I intended to post to this group after I had worked on
it.

Your point is taken.  Norvig seems to say that overfitting is a
general problem.  The  method given to study the problem is
probabilistic but it is based on the premise that the original data is
substantially intact.  But Norvig goes on to mention that with pruning
noise can be tolerated. If you read my message again you may see that
my central issue was not really centered on the issue of whether
anyone in the ai-probability community was aware of the nature of the
science of statistics but whether or not probability can be used as
the fundamental basis to create agi given the complexities of the
problem.  So while your example of overfitting certainly does deflate
my statements that no one in the ai-probability community gets this
stuff, it does not actually address the central issue that I was
thinking of.

I am not sure if Norvig's application of a probabilistic method to
detect overfitting is truly directed toward the agi community.  In
other words: Has anyone in this grouped tested the utility and clarity
of the decision making of a fully automated system to detect
overfitting in a range of complex IO data fields that one might expect
to encounter in AGI?

Jim Bromer



On Sat, Nov 29, 2008 at 11:32 AM, Abram Demski <[EMAIL PROTECTED]> wrote:
  

Jim,

There is a large body of literature on avoiding overfitting, ie,
finding patterns that work for more then just the data at hand. Of
course, the ultimate conclusion is that you can never be 100% sure;
but some interesting safeguards have been cooked up anyway, which help
in practice.

My point is, the following paragraph is unfounded:



This is a problem any AI method has to deal with, it is not just a
probability thing.  What is wrong with the AI-probability group
mind-set is that very few of its proponents ever consider the problem
of statistical ambiguity and its obvious consequences.
  

The "AI-probability group" definitely considers such problems.

--Abram

On Sat, Nov 29, 2008 at 10:48 AM, Jim Bromer <[EMAIL PROTECTED]> wrote:


One of the problems that comes with the casual use of analytical
methods is that the user becomes inured to their habitual misuse. When
a casual familiarity is combined with

Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Jim Bromer
In response to my message, where I said,
"What is wrong with the AI-probability group mind-set is that very few
of its proponents ever consider the problem of statistical ambiguity
and its obvious consequences."
Abram noted,
"The "AI-probability group" definitely considers such problems.
There is a large body of literature on avoiding overfitting, ie,
finding patterns that work for more then just the data at hand."

Suppose I responded with a remark like,
6341/6344 wrong Abram...

A remark like this would be absurd because it lacks reference,
explanation and validity while also presenting a comically false
numerical precision for its otherwise inherent meaninglessness.

Where does the ratio 6341/6344 come from?  I did a search in ListBox
of all references to the word "overfitting" made in 2008 and found
that out of 6344 messages only 3 actually involved the discussion of
the word before Abram mentioned it today.  (I don't know how good
ListBox is for this sort of thing).

So what is wrong with my conclusion that Abram was 6341/6344 wrong?
Lots of things and they can all be described using declarative
statements.

First of all the idea that the conversations in this newsgroup
represent an adequate sampling of all ai-probability enthusiasts is
totally ridiculous.  Secondly, Abram's mention of overfitting was just
one example of how the general ai-probability community is aware of
the problem that I mentioned.  So while my statistical finding may be
tangentially relevant to the discussion, the presumption that it can
serve as a numerical evaluation of Abram's 'wrongness' in his response
is so absurd that it does not merit serious consideration.  My
skepticism then concerns the question of just how a fully automated
AGI program that relied fully on probability methods would be able
to avoid getting sucked into the vortex of such absurd mushy reasoning
if it wasn't also able to analyze the declarative inferences of its
application of statistical methods?

I believe that an AI program that is to be capable of advanced AGI has
to be capable of declarative assessment to work with any other
mathematical methods of reasoning it is programmed with.

Reasoning about declarative knowledge does not necessarily have to be
done in text or something like that.  That is not what I
mean.  What I really mean is that an effective AI program is going to
have to be capable of some kind of referential analysis of events in
the IO data environment using methods other than probability.  But if
it is to attain higher intellectual functions it has to be done in a
creative and imaginative way.

Just as human statisticians have to be able to express and analyze the
application of their statistical methods using declarative statements
that refer to the data subject fields and the methods used, an AI
program that is designed to utilize automated probability reasoning to
attain greater general success is going to have to be able to express
and analyze its statistical assessments in terms of some kind of
declarative methods as well.
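
One crude way to picture this (the field names and the check below are purely
illustrative, not a proposed design): every statistical result the system
produces carries a declarative record of where it came from, and other
reasoning can inspect that record and reject the number.

    from dataclasses import dataclass

    @dataclass
    class StatisticalClaim:
        statement: str          # the declarative content of the claim
        value: float            # the numerical result
        method: str             # how it was computed
        population: str         # what the sample is supposed to represent
        sample_size: int

    def plausible(claim: StatisticalClaim, intended_population: str) -> bool:
        # A reasoner can reject a number whose declarative context doesn't support it.
        return claim.population == intended_population and claim.sample_size >= 30

    claim = StatisticalClaim(
        statement="Abram is 6341/6344 wrong",
        value=6341 / 6344,
        method="keyword search for 'overfitting' in one mailing-list archive",
        population="posts on this list in 2008",
        sample_size=6344,
    )

    # The number looks precise, but the declarative record shows the sample is
    # the wrong population for a claim about the whole ai-probability community.
    print(plausible(claim, intended_population="the ai-probability community"))   # False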

Jim Bromer




Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Jim Bromer
On Sat, Nov 29, 2008 at 1:51 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> To me the big weaknesses of modern probability theory lie  in
> **hypothesis generation** and **inference**.   Testing a hypothesis
> against data, to see if it's overfit to that data, is handled well by
> crossvalidation and related methods.
>
> But the problem of: given a number of hypotheses with support from a
> dataset, generating other interesting hypotheses that will also have
> support from the dataset ... that is where traditional probabilistic
> methods (though not IMO the foundational ideas of probability) fall
> short, providing only unscalable or oversimplified solutions...
>
> -- Ben G

Could you give me a little more detail about your thoughts on this?
Do you think the problem of increasing uncomputableness of complicated
complexity is the common thread found in all of the interesting,
useful but unscalable methods of AI?
Jim Bromer




Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Ben Goertzel
Whether an AI needs to explicitly manipulate declarative statements is
a deep question ... it may be that other dynamics that are in some
contexts implicitly equivalent to this sort of manipulation will
suffice

But anyway, there is no contradiction between manipulating explicit
declarative statements and using probability theory.

Some of my colleagues and I spent a bunch of time during the last few
years figuring out nice ways to combine probability theory and formal
logic.  In fact there are "Progic" workshops every year exploring
these sorts of themes.

So, while the mainstream of probability-focused AI theorists aren't
doing hard-core probabilistic logic, some researchers certainly are...

I've been displeased with the wimpiness of the progic subfield, and
its lack of contribution to areas like inference with nested
quantifiers, and intensional inference ... and I've tried to remedy
these shortcomings with PLN (Probabilistic Logic Networks) ...

So, I think it's correct to criticize the mainstream of
probability-focused AI theorists for not doing AGI ;-) ... but I don't
think they're overlooking basic issues like overfitting and such ... I
think they're just focusing on relatively easy problems where (unlike
if you want to do explicitly probability theory based AGI) you don't
need to merge probability theory with complex logical constructs...

ben

On Sat, Nov 29, 2008 at 9:15 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> In response to my message, where I said,
> "What is wrong with the AI-probability group mind-set is that very few
> of its proponents ever consider the problem of statistical ambiguity
> and its obvious consequences."
> Abram noted,
> "The "AI-probability group" definitely considers such problems.
> There is a large body of literature on avoiding overfitting, ie,
> finding patterns that work for more then just the data at hand."
>
> Suppose I responded with a remark like,
> 6341/6344 wrong Abram...
>
> A remark like this would be absurd because it lacks reference,
> explanation and validity while also presenting a comically false
> numerical precision for its otherwise inherent meaninglessness.
>
> Where does the ratio 6341/6344 come from?  I did a search in ListBox
> of all references to the word "overfitting" made in 2008 and found
> that out of 6344 messages only 3 actually involved the discussion of
> the word before Abram mentioned it today.  (I don't know how good
> ListBox is for this sort of thing).
>
> So what is wrong with my conclusion that Abram was 6341/6344 wrong?
> Lots of things and they can all be described using declarative
> statements.
>
> First of all the idea that the conversations in this newsgroup
> represent an adequate sampling of all ai-probability enthusiasts is
> totally ridiculous.  Secondly, Abram's mention of overfitting was just
> one example of how the general ai-probability community is aware of
> the problem that I mentioned.  So while my statistical finding may be
> tangentially relevant to the discussion, the presumption that it can
> serve as a numerical evaluation of Abram's 'wrongness' in his response
> is so absurd that it does not merit serious consideration.  My
> skepticism then concerns the question of just how would a fully
> automated AGI program that relied fully on probability methods be able
> to avoid getting sucked into the vortex of such absurd mushy reasoning
> if it wasn't also able to analyze the declarative inferences of its
> application of statistical methods?
>
> I believe that an AI program that is to be capable of advanced AGI has
> to be capable of declarative assessment to work with any other
> mathematical methods of reasoning it is programmed with.
>
> The ability to reason about declarative knowledge does not necessarily
> have to be done in text or something like that.  That is not what I
> mean.  What I really mean is that an effective AI program is going to
> have to be capable of some kind of referential analysis of events in
> the IO data environment using methods other than probability.  But if
> it is to attain higher intellectual functions it has to be done in a
> creative and imaginative way.
>
> Just as human statisticians have to be able to express and analyze the
> application of their statistical methods using declarative statements
> that refer to the data subject fields and the methods used, an AI
> program that is designed to utilize automated probability reasoning to
> attain greater general success is going to have to be able to express
> and analyze its statistical assessments in terms of some kind of
> declarative methods as well.
>
> Jim Bromer
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Re

Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Ben Goertzel
> Could you give me a little more detail about your thoughts on this?
> Do you think the problem of increasing uncomputableness of complicated
> complexity is the common thread found in all of the interesting,
> useful but unscalable methods of AI?
> Jim Bromer

Well, I think that dealing with combinatorial explosions is, in
general, the great unsolved problem of AI. I think the opencog prime
design can solve it, but this isn't proved yet...

Even relatively unambitious AI methods tend to get dumbed down further
when you try to scale them up, due to combinatorial explosion issues.
For instance, Bayes nets aren't that clever to begin with ... they
don't do that much ... but to make them scalable, one has to make them
even more limited and basically ignore combinational causes and just
look at causes between one isolated event-class and another...
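
To put a number on that, here is a small sketch (the standard recurrence for
counting labelled DAGs, nothing specific to any particular system) of how fast
the space of candidate Bayes net structures blows up:

    from math import comb
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def dag_count(n):
        """Number of labelled DAGs on n nodes (Robinson's recurrence)."""
        if n == 0:
            return 1
        return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * dag_count(n - k)
                   for k in range(1, n + 1))

    for n in range(1, 11):
        print(n, dag_count(n))
    # Ten variables already allow about 4.2e18 candidate structures, so
    # exhaustive structure search is hopeless.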

And of course, all theorem provers are unscalable due to having no
scalable methods of inference tree pruning...

Evolutionary methods can't handle complex fitness functions because
they'd require overly large population sizes...

In general, the standard AI methods can't handle pattern recognition
problems requiring finding complex interdependencies among multiple
variables that are obscured among scads of other variables

The human mind seems to do this by building up intuition via drawing
analogies among multiple problems it confronts during its history.
Also of course the human mind builds internal simulations of the
world, and probes these simulations and draws analogies from problems
it solved in its inner sim world, to problems it encounters in the
outer world...

etc. etc. etc.

ben




Re: [agi] Mushed Up Decision Processes

2008-11-29 Thread Jim Bromer
On Sat, Nov 29, 2008 at 11:53 AM, Steve Richfield
<[EMAIL PROTECTED]> wrote:
> Jim,
>
> YES - and I think I have another piece of your puzzle to consider...
>
> A longtime friend of mine, Dave,  went on to become a PhD psychologist, who
> subsequently took me on as a sort of "project" - to figure out why most
> people who met me then either greatly valued my friendship, or quite the
> opposite, would probably kill me if they had the safe opportunity. After
> much discussion, interviewing people in both camps, etc., he came up with
> what appears to be a key to decision making in general...
>
> It appears that people "pigeonhole" other people, concepts, situations,
> etc., into a very finite number of pigeonholes - probably just tens of
> pigeonholes for other people.


Steve:
I found that I used a similar method of categorizing people who I
talked to on these newsgroups.  I wouldn't call it pigeonholing
though. (Actually, I wouldn't call anything pigeonholing, but that is
just me.)  I would rely on a handful of generalizations that I thought
were applicable to different people who tended to exhibit some common
characteristics.  However, when I discovered that an individual who I
thought I understood had another facet to his personality or thoughts
that I hadn't seen before, I often found that I had to apply another
categorical generality to my impression of him.  I soon built up
generalization categories based on different experiences with
different kinds of people, and I eventually realized that although I
often saw similar kinds of behaviors in different people, each person
seemed to be comprised of different sets (or different strengths) of
the various component characteristics that I derived to recall my
experiences with people in these groups.  So I came to similar
conclusions that you and your friend came to.

An interesting thing about talking to reactive people in these
discussion groups: I found that by eliminating more and more affect
from my comments, by refraining from personal comments, innuendos, or
meta-discussion analyses, and by increasingly emphasizing
objectivity in my comments, I could substantially reduce any hostility
directed at me.
from my conversation just to placate some unpleasant person.  But I
guess I should start using that technique again when necessary.

Jim Bromer


On Sat, Nov 29, 2008 at 11:53 AM, Steve Richfield
<[EMAIL PROTECTED]> wrote:
> Jim,
>
> YES - and I think I have another piece of your puzzle to consider...
>
> A longtime friend of mine, Dave,  went on to become a PhD psychologist, who
> subsequently took me on as a sort of "project" - to figure out why most
> people who met me then either greatly valued my friendship, or quite the
> opposite, would probably kill me if they had the safe opportunity. After
> much discussion, interviewing people in both camps, etc., he came up with
> what appears to be a key to decision making in general...
>
> It appears that people "pigeonhole" other people, concepts, situations,
> etc., into a very finite number of pigeonholes - probably just tens of
> pigeonholes for other people. Along with the pigeonhole, they keep
> amendments, like "Steve is like Joe, but with ...".
>
> Then, there is the pigeonhole labeled "other" that all the mavericks are
> thrown into. Not being at all like anyone else that most people have ever
> met, I was invariably filed into the "other" pigeonhole, along with
> Einstein, Ted Bundy, Jack the Ripper, Stephen Hawking, etc.
>
> People are "safe" to the extent that they are predictable, and people in the
> "other" pigeonhole got that way because they appear to NOT be predictable,
> e.g. because of their worldview, etc. Now, does the potential value of the
> alternative worldview outweigh the potential danger of perceived
> unpredictability? The answer to this question apparently drove my own
> personal classification in other people.
>
> Dave's goal was to devise a way to stop making enemies, but unfortunately,
> this model of how people got that way suggested no potential solution.
> People who keep themselves safe from others having radically different
> worldviews are truly in a mental prison of their own making, and there is no
> way that someone whom they distrust could ever release them from that
> prison.
>
> I suspect that recognition, decision making, and all sorts of "intelligent"
> processes may be proceeding in much the same way. There may be no
> "grandmother" neuron/pidgeonhole, but rather a "kindly old person" with an
> amendment that "is related". If on the other hand your other grandmother
> flogged you as a child, the filing might be quite different.
>
> Any thoughts?
>
> Steve Richfield
> 
> On 11/29/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
>>
>> One of the problems that comes with the casual use of analytical
>> methods is that the user becomes inured to their habitual misuse. When
>> a casual familiarity i

Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
2008/11/30 Ben Goertzel <[EMAIL PROTECTED]>:
>> Could you give me a little more detail about your thoughts on this?
>> Do you think the problem of increasing uncomputableness of complicated
>> complexity is the common thread found in all of the interesting,
>> useful but unscalable methods of AI?
>> Jim Bromer
>
> Well, I think that dealing with combinatorial explosions is, in
> general, the great unsolved problem of AI. I think the opencog prime
> design can solve it, but this isn't proved yet...

Good luck with that!

> In general, the standard AI methods can't handle pattern recognition
> problems requiring finding complex interdependencies among multiple
> variables that are obscured among scads of other variables
> The human mind seems to do this via building up intuition via drawing
> analogies among multiple problems it confronts during its history.

Yes, so that people learn one problem, then it helps them to learn
other similar ones. Is there any AI software that does this? I'm not
aware of any.

I have proposed a problem domain called "function predictor" whose
purpose is to allow an AI to learn across problem sub-domains,
carrying its learning from one domain to another. (See
http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )
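
To give a flavour of it, here is a minimal sketch of the kind of harness I
have in mind; the function families, predictor interface and scoring below
are invented for this example and are not part of the wiki page:

    import random

    def make_task(family, seed):
        """Draw one concrete function from a family; the predictor never sees the formula."""
        rng = random.Random(seed)
        if family == "linear":
            a, b = rng.uniform(-3, 3), rng.uniform(-3, 3)
            return lambda x: a * x + b
        if family == "quadratic":
            a, b = rng.uniform(-1, 1), rng.uniform(-3, 3)
            return lambda x: a * x * x + b
        raise ValueError(family)

    def score(predictor, families, tasks_per_family=10, examples=20):
        """Mean absolute error on a held-out query after seeing example (x, f(x)) pairs."""
        total, count = 0.0, 0
        for family in families:
            for seed in range(tasks_per_family):
                f = make_task(family, seed)
                pairs = [(x, f(x)) for x in (random.uniform(-5, 5) for _ in range(examples))]
                query = random.uniform(-5, 5)
                total += abs(predictor(pairs, query) - f(query))
                count += 1
        return total / count

    # A deliberately dumb baseline: predict the mean of the observed outputs.
    mean_predictor = lambda pairs, query: sum(y for _, y in pairs) / len(pairs)
    print(score(mean_predictor, ["linear", "quadratic"]))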

I also think it would be useful if there was a regular (maybe annual)
competition in the function predictor domain (or some similar domain).
A bit like the Loebner Prize, except that it would be more useful to
the advancement of AI, since the Loebner prize is silly.

-- 
Philip Hunt, <[EMAIL PROTECTED]>
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Ben Goertzel
Hi,

> I have proposed a problem domain called "function predictor" whose
> purpose is to allow an AI to learn across problem sub-domains,
> carrying its learning from one domain to another. (See
> http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )
>
> I also think it would be useful if there was a regular (maybe annual)
> competition in the function predictor domain (or some similar domain).
> A bit like the Loebner Prize, except that it would be more useful to
> the advancement of AI, since the Loebner prize is silly.
>
> --
> Philip Hunt, <[EMAIL PROTECTED]>

How does that differ from what is generally called "transfer learning" ?

ben g




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
2008/11/30 Ben Goertzel <[EMAIL PROTECTED]>:
> Hi,
>
>> I have proposed a problem domain called "function predictor" whose
>> purpose is to allow an AI to learn across problem sub-domains,
>> carrying its learning from one domain to another. (See
>> http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )
>>
>> I also think it would be useful if there was a regular (maybe annual)
>> competition in the function predictor domain (or some similar domain).
>> A bit like the Loebner Prize, except that it would be more useful to
>> the advancement of AI, since the Loebner prize is silly.
>>
>> --
>> Philip Hunt, <[EMAIL PROTECTED]>
>
> How does that differ from what is generally called "transfer learning" ?

I don't think it does differ. ("Transfer learning" is not a term I'd
previously come across).

-- 
Philip Hunt, <[EMAIL PROTECTED]>
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Ben Goertzel
There was a DARPA program on "transfer learning" a few years back ...
I believe I applied and got rejected (with perfect marks on the
technical proposal, as usual ...) ... I never checked to see who got
the $$ and what they did with it...

ben g

On Sun, Nov 30, 2008 at 11:12 AM, Philip Hunt <[EMAIL PROTECTED]> wrote:
> 2008/11/30 Ben Goertzel <[EMAIL PROTECTED]>:
>> Hi,
>>
>>> I have proposed a problem domain called "function predictor" whose
>>> purpose is to allow an AI to learn across problem sub-domains,
>>> carrying its learning from one domain to another. (See
>>> http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )
>>>
>>> I also think it would be useful if there was a regular (maybe annual)
>>> competition in the function predictor domain (or some similar domain).
>>> A bit like the Loebner Prize, except that it would be more useful to
>>> the advancement of AI, since the Loebner prize is silly.
>>>
>>> --
>>> Philip Hunt, <[EMAIL PROTECTED]>
>>
>> How does that differ from what is generally called "transfer learning" ?
>
> I don't think it does differ. ("Transfer learning" is not a term I'd
> previously come across).
>
> --
> Philip Hunt, <[EMAIL PROTECTED]>
> Please avoid sending me Word or PowerPoint attachments.
> See http://www.gnu.org/philosophy/no-word-attachments.html
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"I intend to live forever, or die trying."
-- Groucho Marx




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Pei Wang
On Sun, Nov 30, 2008 at 11:17 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> There was a DARPA program on "transfer learning" a few years back ...
> I believe I applied and got rejected (with perfect marks on the
> technical proposal, as usual ...) ... I never checked to see who got
> the $$ and what they did with it...

See http://www.cs.utexas.edu/~mtaylor/Publications/AGI08-taylor.pdf

Pei

> ben g
>
> On Sun, Nov 30, 2008 at 11:12 AM, Philip Hunt <[EMAIL PROTECTED]> wrote:
>> 2008/11/30 Ben Goertzel <[EMAIL PROTECTED]>:
>>> Hi,
>>>
 I have proposed a problem domain called "function predictor" whose
 purpose is to allow an AI to learn across problem sub-domains,
 carrying its learning from one domain to another. (See
 http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )

 I also think it would be useful if there was a regular (maybe annual)
 competition in the function predictor domain (or some similar domain).
 A bit like the Loebner Prize, except that it would be more useful to
 the advancement of AI, since the Loebner prize is silly.

 --
 Philip Hunt, <[EMAIL PROTECTED]>
>>>
>>> How does that differ from what is generally called "transfer learning" ?
>>
>> I don't think it does differ. ("Transfer learning" is not a term I'd
>> previously come across).
>>
>> --
>> Philip Hunt, <[EMAIL PROTECTED]>
>> Please avoid sending me Word or PowerPoint attachments.
>> See http://www.gnu.org/philosophy/no-word-attachments.html
>>
>>
>>
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "I intend to live forever, or die trying."
> -- Groucho Marx
>
>
>




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Stephen Reed
Ben,

Cycorp participated in the DARPA Transfer Learning project as a subcontractor.
My project role was simply that of a team member, and I did not attend any PI
meetings.  But I did work on getting a Quake III Arena environment running at
Cycorp, which was to be a transfer learning testbed.  I also enhanced Cycorp's
Java application that gathered facts from the web using the Google API.

Regarding winning a DARPA contract, I believe that teaming with an established 
contractor, e.g. SAIC, SRI, is beneficial.

 
Cheers,
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860







Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Stephen Reed
Matt Taylor was also an intern at Cycorp, where he was on Cycorp's Transfer
Learning team with me.
-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860







Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Ben Goertzel
>
> Regarding winning a DARPA contract, I believe that teaming with an
> established contractor, e.g. SAIC, SRI, is beneficial.
>
> Cheers,
> -Steve

Yeah, I've tried that approach too ...

As it happens, I've had significantly more success getting funding from
various other government agencies ... but DARPA has been the *least*
favorable toward my work of any of them I've tried to deal with.

It seems that, in the 5 years I've been applying for such grants,
DARPA hasn't happened to have a program manager whose particular taste
in AI is compatible with mine...

-- Ben G




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Pei Wang
Stephen,

Does that mean what you did at Cycorp on transfer learning is similar
to what Taylor presented to AGI-08?

Pei

On Sun, Nov 30, 2008 at 1:01 PM, Stephen Reed <[EMAIL PROTECTED]> wrote:
> Matt Taylor was also an intern at Cycorp, where he was on Cycorp's Transfer
> Learning team with me.
> -Steve
>
> Stephen L. Reed
>
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Stephen Reed
Pei,
Matt Taylor's work at Cycorp was not closely related to his published work at 
AGI-08.

Matt contributed to a variety of other Transfer Learning tasks, and I cannot 
recall exactly what those were.  
-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





From: Pei Wang <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Sunday, November 30, 2008 12:16:41 PM
Subject: Re: [agi] Mushed Up Decision Processes

Stephen,

Does that mean what you did at Cycorp on transfer learning is similar
to what Taylor presented to AGI-08?

Pei



Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Jim Bromer
I realized that my idea of declarative-like statements could refer to
statistical objects and methods as well.  In fact, if they were to
provide the sort of efficacy I want for them, some would have to.  I
am not specifically talking about mixing logic with probability
theory.

Thanks for the comments.  I figured that probability methods in AGI
would suffer from combinatorial problems, but I hadn't talked to
anyone who actually took it to that level.
Jim Bromer
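
Ben's reply quoted below mentions combining probability theory with
formal logic.  For concreteness, a minimal sketch of what one such
combination can look like in code: a deduction rule that chains two
uncertain implications under an explicit independence assumption (the
general flavor of the "progic" work he refers to, not Novamente's actual
PLN formulas):

    # Chain A->B (strength sAB) and B->C (strength sBC) into an estimate of
    # A->C, assuming C is independent of A once B (or not-B) is fixed.
    def deduce(sAB, sBC, sB, sC):
        """Estimate P(C|A) from P(B|A), P(C|B) and the priors P(B), P(C)."""
        if sB >= 1.0:
            return sBC
        # P(C | not B), recovered from the priors and clamped to [0, 1]
        sC_given_notB = min(1.0, max(0.0, (sC - sB * sBC) / (1.0 - sB)))
        return sAB * sBC + (1.0 - sAB) * sC_given_notB

    # "Most ravens are birds", "most birds fly", plus invented base rates
    # for birds and flying things:
    print(deduce(sAB=0.99, sBC=0.90, sB=0.20, sC=0.25))   # roughly 0.89

The point is only that implications and probabilities can live in one
formalism; the hard parts Ben alludes to (quantifiers, intensional
inference, real truth-value bookkeeping) are what make the area difficult.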

On Sat, Nov 29, 2008 at 9:21 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Whether an AI needs to explicitly manipulate declarative statements is
> a deep question ... it may be that other dynamics that are in some
> contexts implicitly equivalent to this sort of manipulation will
> suffice
>
> But anyway, there is no contradiction between manipulating explicit
> declarative statements and using probability theory.
>
> Some of my colleagues and I spent a bunch of time during the last few
> years figuring out nice ways to combine probability theory and formal
> logic.  In fact there are "Progic" workshops every year exploring
> these sorts of themes.
>
> So, while the mainstream of probability-focused AI theorists aren't
> doing hard-core probabilistic logic, some researchers certainly are...
>
> I've been displeased with the wimpiness of the progic subfield, and
> its lack of contribution to areas like inference with nested
> quantifiers, and intensional inference ... and I've tried to remedy
> these shortcomings with PLN (Probabilistic Logic Networks) ...
>
> So, I think it's correct to criticize the mainstream of
> probability-focused AI theorists for not doing AGI ;-) ... but I don't
> think they're overlooking basic issues like overfitting and such ... I
> think they're just focusing on relatively easy problems where (unlike
> if you want to do explicitly probability theory based AGI) you don't
> need to merge probability theory with complex logical constructs...
>
> ben
>
> On Sat, Nov 29, 2008 at 9:15 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
>> In response to my message, where I said,
>> "What is wrong with the AI-probability group mind-set is that very few
>> of its proponents ever consider the problem of statistical ambiguity
>> and its obvious consequences."
>> Abram noted,
>> "The "AI-probability group" definitely considers such problems.
>> There is a large body of literature on avoiding overfitting, ie,
>> finding patterns that work for more then just the data at hand."
>>
>> Suppose I responded with a remark like,
>> 6341/6344 wrong Abram...
>>
>> A remark like this would be absurd because it lacks reference,
>> explanation and validity while also presenting a comically false
>> numerical precision for its otherwise inherent meaninglessness.
>>
>> Where does the ratio 6341/6344 come from?  I did a search in ListBox
>> of all references to the word "overfitting" made in 2008 and found
>> that out of 6344 messages only 3 actually involved the discussion of
>> the word before Abram mentioned it today.  (I don't know how good
>> ListBox is for this sort of thing).
>>
>> So what is wrong with my conclusion that Abram was 6341/6344 wrong?
>> Lots of things and they can all be described using declarative
>> statements.
>>
>> First of all the idea that the conversations in this newsgroup
>> represent an adequate sampling of all ai-probability enthusiasts is
>> totally ridiculous.  Secondly, Abram's mention of overfitting was just
>> one example of how the general ai-probability community is aware of
>> the problem that I mentioned.  So while my statistical finding may be
>> tangentially relevant to the discussion, the presumption that it can
>> serve as a numerical evaluation of Abram's 'wrongness' in his response
>> is so absurd that it does not merit serious consideration.  My
>> skepticism, then, concerns how a fully automated AGI program that
>> relied entirely on probability methods could avoid getting sucked into
>> the vortex of such absurd, mushy reasoning if it were not also able to
>> analyze the declarative inferences behind its application of
>> statistical methods.
>>
>> I believe that an AI program that is to be capable of advanced AGI has
>> to be capable of declarative assessment to work with any other
>> mathematical methods of reasoning it is programmed with.
>>
>> The ability to reason about declarative knowledge does not necessarily
>> have to be done in text or something like that.  That is not what I
>> mean.  What I really mean is that an effective AI program is going to
>> have to be capable of some kind of referential analysis of events in
>> the IO data environment using methods other than probability.  But if
>> it is to attain higher intellectual functions it has to be done in a
>> creative and imaginative way.
>>
>> Just as human statisticians have to be able to express and analyze the
>> application of their statistical methods using declarative statements
>> that refer to the data subject fie

Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Jim Bromer
Charles,
I don't agree with the details, but I do agree that something that is
effectively similar to your description does play a role.  I seem to
pick a few words at a time that follow some simple plan, and yes, they
do go through some filters.  But I think I am also selecting individual
words.  And although I don't feel that I have selected an emotional
tone, there is no doubt that I am operating within one.  So, while I
disagree with some of the mechanics you suggested, there is no question
that something like that is going on.  I wonder whether I am actually
doing my thinking one or two words at a time!  If so, then our ability
to articulate any idea is based on some kind of coincidence, like
driving down very familiar streets.  If something like that were true,
you should be able to write a program to detect strong general patterns
in people's sentences.  These patterns would be partly based on
syntactic classifiers and partly based on two- or three-word sequences.
They would not show up when drawn from different people's writing, but
they should show up when examining one person's writing at a time.  You
could find new kinds of insights from that kind of study.
Jim Bromer
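
A rough sketch of the experiment Jim suggests above: count two-word
sequences per author and check whether an author's own text reuses them
more than another author's text does.  The strings below are invented
stand-ins for real writing samples:

    from collections import Counter

    def ngrams(text, n=2):
        words = text.lower().split()
        return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

    def overlap(sample, reference, n=2):
        """Fraction of n-grams in `sample` that also occur in `reference`."""
        s, r = ngrams(sample, n), ngrams(reference, n)
        return sum(c for g, c in s.items() if g in r) / max(1, sum(s.values()))

    author_a_old = "i think that we must rely on large collections of simple patterns"
    author_a_new = "i think that we must be able to reuse large collections of patterns"
    author_b_new = "my theory is that thoughts are filtered through a screen of intent"

    print("A against A's earlier writing:", overlap(author_a_new, author_a_old))
    print("B against A's earlier writing:", overlap(author_b_new, author_a_old))

On a real corpus one would add the syntactic classifiers Jim mentions
and control for topic, but even this crude per-author count is the kind
of signal he is describing.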

On Sat, Nov 29, 2008 at 2:27 PM, Charles Hixson
<[EMAIL PROTECTED]> wrote:
> A response to:
>
> "I wondered why anyone would deface the
> expression of his own thoughts with an emotional and hostile message,"
>
> My theory is that thoughts are generated internally and forced into words
> via a babble generator.  Then the thoughts are filtered through a screen to
> remove any that don't match one's intent, that don't make sense, etc.  The
> value assigned to each expression is initially dependent on how well it
> expresses one's emotional tenor.
>
> Therefore I would guess that all of the verbalizations that the individual
> generated which passed the first screen were hostile in nature.  From the
> remaining sample he filtered those which didn't generate sensible-to-him
> scenarios when fed back into his world model.  This left him with a much
> reduced selection of phrases to choose from when composing his response.
>
> In my model this happens a phrase at a time rather than a sentence at a
> time.  And there is also a probabilistic element where each word has a
> certain probability of being followed by divers other words.  I often don't
> want to express the most likely probability, as by choosing a less
> frequently chosen alternative I (believe I) create the impression a more
> studied, i.e. thoughtful, response.  But if one wishes to convey a more
> dynamic style then one would choose a more likely follower.
>
> Note that in this scenario phrases are generated both randomly and in
> parallel.  Then they are selected for fitness for expression by passing
> through various filters.
>
> Reasonable?
>
>
> Jim Bromer wrote:
>>
>> Hi.  I will just make a quick response to this message and then I want
>> to think about the other messages before I reply.
>>
>> A few weeks ago I decided that I would write a criticism of
>> ai-probability to post to this group.  I wasn't able remember all of
>> my criticisms so I decided to post a few preliminary sketches to
>> another group.  I wasn't too concerned about how they responded, and
>> in fact I thought they would just ignore me.  The first response I got
>> was from an irate guy who was quite unpleasant and then finished by
>> declaring that I slandered the entire ai-probability community!  He
>> had some reasonable criticisms about this but I considered the issue
>> tangential to the central issue I wanted to discuss. I would have
>> responded to his more reasonable criticisms if they hadn't been
>> embedded in his enraged rant.  I wondered why anyone would deface the
>> expression of his own thoughts with an emotional and hostile message,
>> so I wanted to try the same message on this group to see if anyone who
>> was more mature would focus on this same issue.
>>
>> Abram made a measured response but his focus was on the
>> over-generalization.  As I said, this was just a preliminary sketch of
>> a message that I intended to post to this group after I had worked on
>> it.
>>
>> Your point is taken.  Norvig seems to say that overfitting is a
>> general problem.  The  method given to study the problem is
>> probabilistic but it is based on the premise that the original data is
>> substantially intact.  But Norvig goes on to mention that with pruning
>> noise can be tolerated. If you read my message again you may see that
>> my central issue was not really centered on the issue of whether
>> anyone in the ai-probability community was aware of the nature of the
>> science of statistics but whether or not probability can be used as
>> the fundamental basis to create agi given the complexities of the
>> problem.  So while your example of overfitting certainly does deflate
>> my statements that no one in the ai-probability community gets this
>> stuff, it does not actually addres
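
Charles's babble-and-filter model, quoted above, is easy to caricature
in code: propose random continuations from per-word follower tables,
then keep only candidates that pass a screen.  Everything here is
invented for illustration (the "hostile word" list stands in for his
emotional-tenor filter):

    import random

    # Hypothetical follower table: each word maps to possible next words.
    followers = {
        "the": ["idea", "argument", "critic"],
        "idea": ["is", "seems"],
        "argument": ["is", "fails"],
        "critic": ["is", "rants"],
        "is": ["wrong", "interesting", "hostile"],
        "seems": ["wrong", "interesting"],
        "fails": ["badly"],
        "rants": ["loudly"],
    }
    hostile_words = {"wrong", "hostile", "rants", "loudly", "fails", "badly"}

    def babble(start, length, tone="neutral", tries=50):
        """Generate-and-filter: propose random continuations, keep the
        first one whose words all pass the tone screen."""
        for _ in range(tries):
            words, ok = [start], True
            for _ in range(length - 1):
                options = followers.get(words[-1])
                if not options:
                    break
                words.append(random.choice(options))
            if tone == "neutral" and any(w in hostile_words for w in words):
                ok = False
            if ok:
                return " ".join(words)
        return None

    random.seed(2)
    print(babble("the", 4, tone="neutral"))
    print(babble("the", 4, tone="any"))

A real version would need grammar, a world model for the
sensible-to-him check, and far richer follower statistics, but the
generate-then-filter shape is the same.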

Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Jim Bromer
Ed,
I think that we must rely on large collections of relatively simple
patterns that are somehow capable of being mixed and used in
interactions with the others.  These interacting patterns (to use your
term) would have extensive variations to make them flexible and useful
with other patterns.

When we learn that national housing prices did not provide us with the
kind of detail we needed, we go and figure out other ways to find data
that shows the variations that would have helped us prepare better for
a situation like the one we are currently in.

I was thinking of that exact example when I wrote about mushy decision
making, because the national average price would be mushier than
regional prices or an index with multiple price levels.  The mushiness
of an index does not mean that the index is garbage, but since
something like this is derived from finer-grained statistics, it really
exemplifies the problem.

My idea is that an AGI program would have to go further than data
mining.  It would have to be able to shape its own use of statistics
in order to establish validity for itself.  I really feel that there
is something important about the classifiers of statistical methods
that I just haven't grasped yet.  My example of this comes from
statistics that are similar but just different enough so
that they don't mesh quite right.  Like two different marketing
surveys that provide similar information which is so close that a
marketer can draw conclusions from their combination but which aren't
actually close enough to justify this process.  Like asking different
representative groups if they are planning to buy a television in one
survey, and asking how much they think they will spend on appliances
during the next two years.  The two surveys are so close that you know
the results can be combined, but they are so different that it is
almost impossible to justify the combination in any reasonable way. If
I could only figure this one out I think the other problems I am
interested in would start to solve themselves.

Jim Bromer
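
A toy version of the survey mismatch Jim describes above: two surveys
that feel combinable but measure different things, so any combination
leans on a bridging assumption that neither survey justifies.  All
numbers are invented:

    # Survey 1: share of households planning to buy a television.
    # Survey 2: mean planned appliance spending over the next two years.
    tv_buyer_fraction = 0.30
    mean_appliance_spend = 800.0

    # To say anything about television spending we must assume what share
    # of appliance spending comes from TV purchases, a quantity neither
    # survey measured.  The conclusion swings with the guess:
    for assumed_tv_share in (0.2, 0.4, 0.6):
        implied = assumed_tv_share * mean_appliance_spend / tv_buyer_fraction
        print(f"assumed TV share {assumed_tv_share:.0%}: "
              f"implied spend per TV-buying household = ${implied:.0f}")

The arithmetic is trivial; the mushiness lives entirely in the assumed
share, which is exactly the kind of hidden assumption an AGI that shapes
its own use of statistics would have to track.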

On Sat, Nov 29, 2008 at 11:40 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Jim
>
> My understanding is that a Novamente-like system would have a process of
> natural selection that tends to favor the retention and use of patterns
> (perceptive, cognitive, behavioral) that prove themselves useful in achieving
> goals in the world in which it is embodied.
>
> It seems to me that such a process of natural selection would tend to naturally
> put some sort of limit on how out-of-touch many of an AGI's patterns would
> be, at least with regard to patterns about things for which the AGI has had
> considerable experience from the world in which it is embodied.
>
> However, we humans often get pretty out of touch with real world
> probabilities, as the recent bubble in housing prices, and the commonly
> said, although historically inaccurate, statement of several years ago ---
> that housing prices never go down on a national basis --- show.
>
> It would be helpful to make AGIs a little more accurate in their
> evaluation of the evidence for many of their assumptions --- and what that
> evidence really says --- than we humans are.
>
> Ed Porter
>

Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread J. Andrew Rogers


On Nov 30, 2008, at 7:31 AM, Philip Hunt wrote:

> 2008/11/30 Ben Goertzel <[EMAIL PROTECTED]>:
>> In general, the standard AI methods can't handle pattern recognition
>> problems requiring finding complex interdependencies among multiple
>> variables that are obscured among scads of other variables
>> The human mind seems to do this via building up intuition via drawing
>> analogies among multiple problems it confronts during its history.
>
> Yes, so that people learn one problem, then it helps them to learn
> other similar ones. Is there any AI software that does this? I'm not
> aware of any.



To do this as a practical matter, you need to address *at least* two  
well-known hard-but-important unsolved algorithm problems in  
completely different areas of theoretical computer science that have  
nothing to do with AI per se.  That is no small hurdle, even if you  
are a bloody genius.


That said, I doubt most AI researchers could even tell you what those  
two big problems are, which is, obliquely, the other part of the problem.




> I have proposed a problem domain called "function predictor" whose
> purpose is to allow an AI to learn across problem sub-domains,
> carrying its learning from one domain to another. (See
> http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )



In Feder/Merhav/Gutman's 1995 "Reflections on..." follow-up to their
1992 paper on universal sequence prediction, they make the observation
(at the link below) that it is probably useful to introduce the concept
of "prediction error complexity" as an important metric, which is
similar, in the theoretical abstract, to what you are talking about:


http://www.itsoc.org/review/meir/node5.html

Our understanding of this area is better in 2008 than it was in 1995,  
but this is one of the earliest serious references to the idea in a  
theoretical way.  Somewhat obscure and primitive by current standards,  
but influential in the AIXI and related flavors of AI theory based on  
computational information theory. Or at least, I found it very  
interesting and useful a decade ago.
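
Not the Feder/Merhav/Gutman construction itself, but a minimal sketch
of the flavor it points at: mix a few simple predictors of a binary
sequence, reweight each by how well it has predicted so far, and track
the cumulative prediction (log-loss) error:

    import math

    def markov_predictor(order):
        """Laplace-smoothed predictor of P(next bit = 1) given `order` context bits."""
        counts = {}
        def predict(history):
            ctx = tuple(history[-order:]) if order else ()
            ones, total = counts.get(ctx, (0, 0))
            return (ones + 1) / (total + 2)
        def update(history, bit):
            ctx = tuple(history[-order:]) if order else ()
            ones, total = counts.get(ctx, (0, 0))
            counts[ctx] = (ones + bit, total + 1)
        return predict, update

    experts = [markov_predictor(k) for k in (0, 1, 2)]
    weights = [1.0 / len(experts)] * len(experts)

    sequence = [0, 1, 0, 1, 0, 1, 1, 0] * 6
    history, log_loss = [], 0.0
    for bit in sequence:
        preds = [p(history) for p, _ in experts]
        p1 = sum(w * p for w, p in zip(weights, preds))    # mixture forecast
        log_loss += -math.log2(p1 if bit else 1.0 - p1)
        likes = [p if bit else 1.0 - p for p in preds]     # per-expert likelihood
        total = sum(w * l for w, l in zip(weights, likes))
        weights = [w * l / total for w, l in zip(weights, likes)]
        for _, update in experts:
            update(history, bit)
        history.append(bit)

    print(len(sequence), "bits, cumulative log-loss:", round(log_loss, 2))

The mixture's cumulative log-loss exceeds that of the best single
expert by at most log2(number of experts) bits, which is the kind of
prediction-error guarantee this literature formalizes.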


Cheers,

J. Andrew Rogers

