RE: [agi] Patterns and Automata

2008-07-16 Thread John G. Rose
> From: Pei Wang [mailto:[EMAIL PROTECTED]
> On Mon, Jul 7, 2008 at 12:49 AM, John G. Rose <[EMAIL PROTECTED]>
> wrote:
> >
> > In pattern recognition, are some patterns not expressible with
> > automata?
> 
> I'd rather say "not easily/naturally expressible". Automata are not a
> popular technique in pattern recognition compared to, say, NNs. You
> may want to check out textbooks on PR, such as
> http://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1215382348&sr=8-2
> 
> > The reason I ask is that I am trying to read sensory input using
> > "automata recognition". I hear a lot of discussion of pattern
> > recognition and am wondering if pattern recognition is the same as
> > automata recognition.
> 
> Currently "pattern recognition" is a much more general category than
> "automata recognition".
> 


I am thinking of bridging the gap somewhat with automata recognition plus
cellular automaton (CA) recognition; by automata I mean automata,
semiautomata, and automata without actions. But recognizing automata from
data requires some of the techniques that pattern recognition uses.
Automata are easy to work with, especially with visual data, and I'm trying
to get to an automata subset equivalent to general pattern recognition.
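
For concreteness, here is a toy sketch of the kind of thing I mean by
automata recognition: a hard-coded DFA that flags a fixed pattern in a
symbol stream (the alphabet, pattern, and transition table are made up
for illustration):

# Toy DFA recognizer: flags the pattern "abc" anywhere in a symbol
# stream.  States 0-3 encode how much of the pattern has been seen;
# state 3 is accepting.  Symbols with no listed transition reset to 0.
TRANSITIONS = {
    (0, 'a'): 1, (1, 'a'): 1, (2, 'a'): 1,
    (1, 'b'): 2,
    (2, 'c'): 3,
}

def recognize(stream):
    state = 0
    for sym in stream:
        state = TRANSITIONS.get((state, sym), 0)
        if state == 3:
            return True   # pattern found
    return False

print(recognize("xxabyabcz"))  # True
print(recognize("ababab"))    # False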

I haven't heard of any profound general pattern recognition techniques, so
I'm more comfortable attempting to derive my own functional model. I'm
suspicious of how existing pattern classification schemes work, as they are
ultimately dependent on the mathematical systems used to describe them. And
then there is the space of all patterns compared to the space of all
probable patterns in this universe... 

I'd be interested in books that study pattern processing across a complex
systems layer... or, in this case, automata processing, just to get a
perspective on any potential computational complexity advantages.

John






Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Richard Loosemore

Abram Demski wrote:

For what it is worth, I agree with Richard Loosemore in that your
first description was a bit ambiguous, and it sounded like you were
saying that backward chaining would add facts to the knowledge base,
which would be wrong. But you've cleared up the ambiguity.


I concur:  I was simply trying to clear up an ambiguity in the phrasing.



Richard Loosemore

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Abram Demski
The way I see it, on the expert systems front, Bayesian networks have
replaced the algorithms currently being discussed. They are more
flexible, since they are probabilistic, and they also have associated
learning algorithms. For nonprobabilistic systems, the resolution
algorithm is more generally applicable (it deals with any logical
statement it is given, rather than only with if-then rules).
Resolution subsumes both forward and backward chaining: to forward
chain, we simply resolve statements in the database; to backward
chain, we add the negation of the query to the database and try to
derive a contradiction by resolving statements (thus proving the query
statement by reductio ad absurdum).
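
To make the refutation style concrete, here is a minimal propositional
sketch (clauses encoded as frozensets of string literals, with "~" marking
negation; the encoding is just for illustration):

# Resolution by refutation: add the negated query to the clause set and
# saturate; deriving the empty clause proves the query by contradiction.
def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    # All clauses obtainable by cancelling one complementary literal pair.
    return [(c1 - {lit}) | (c2 - {negate(lit)})
            for lit in c1 if negate(lit) in c2]

def entails(kb, query):
    clauses = set(kb) | {frozenset([negate(query)])}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:             # saturated without contradiction
            return False
        clauses |= new

# "If Socrates is a man then he is mortal" as the clause {~man, mortal}:
kb = {frozenset(['~man', 'mortal']), frozenset(['man'])}
print(entails(kb, 'mortal'))   # True (backward-chaining style refutation)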

The most AGI-oriented remnant of the rule-based system period is SOAR
(http://sitemaker.umich.edu/soar). It does add new rules to its
system, but they are summaries of old rules (to speed later
inference). SOAR recently added a reinforcement learning capability, but
it doesn't use it to generate new rules, as far as I know.

On Wed, Jul 16, 2008 at 7:16 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Brad: By definition, an expert system rule base contains the total sum of the
> knowledge of a human expert(s) in a particular domain at a given point in
> time.  When you use it, that's what you expect to get.  You don't expect the
> system to modify the rule base at runtime.  If everything you need isn't in
> the rule base, you need to talk to the knowledge engineer. I don't know of
> any expert system that adds rules to its rule base (i.e., becomes "more
> expert") at runtime.  I'm not saying necessarily that this couldn't be done,
> but I've never seen it.
>
> In which case - (thanks BTW for a v. helpful post) - are we talking entirely
> here about narrow AI? Sorry if I've missed this, but has anyone been
> discussing how to provide a flexible, evolving set of rules for behaviour?
> That's the crux of AGI, isn't it? Something at least as flexible as a
> country's Constitution and  Body of Laws. What ideas are on offer here?


Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Abram Demski
For what it is worth, I agree with Richard Loosemore in that your
first description was a bit ambiguous, and it sounded like you were
saying that backward chaining would add facts to the knowledge base,
which would be wrong. But you've cleared up the ambiguity.


Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Mike Tintner
Brad: By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

In which case - (thanks BTW for a v. helpful post) - are we talking entirely 
here about narrow AI? Sorry if I've missed this, but has anyone been 
discussing how to provide a flexible, evolving set of rules for behaviour? 
That's the crux of AGI, isn't it? Something at least as flexible as a 
country's Constitution and  Body of Laws. What ideas are on offer here? 







Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY "THE BINDING PROBLEM"?

2008-07-16 Thread Brad Paulsen



Richard Loosemore wrote:

Brad Paulsen wrote:
I've been following this thread pretty much since the beginning.  I 
hope I didn't miss anything subtle.  You'll let me know if I have, I'm 
sure. ;=)


It appears the need for temporal dependencies or different levels of 
reasoning has been conflated with the terms "forward-chaining" (FWC) 
and "backward-chaining" (BWC), which are typically used to describe 
different rule base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to 
refer to reasoning strategies have absolutely nothing to do with 
temporal dependencies or levels of reasoning.  These two terms refer 
simply, and only, to the algorithms used to evaluate “if/then” rules 
in a rule base (RB).  In the FWC algorithm, the “if” part is evaluated 
and, if TRUE, the “then” part is added to the FWC engine's output.  In 
the BWC algorithm, the “then” part is evaluated and, if TRUE, the “if” 
part is added to the BWC engine's output.  It is rare, but some 
systems use both FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.


Whooaa there.  Something not right here.

Backward chaining is about starting with a goal statement that you would 
like to prove, but at the beginning it is just a hypothesis.  In BWC you 
go about proving the statement by trying to find facts that might 
support it.  You would not start from the statement and then add 
knowledge to your knowledgebase that is consistent with it.




Richard,

I really don't know where you got the idea my descriptions or algorithm
added “...knowledge to your (the “royal” you, I presume) knowledgebase...”.
Maybe you misunderstood my use of the term “output.”  Another (perhaps
better) word for output would be “result” or “action.”  I've also heard
FWC/BWC engine output referred to as the “blackboard.”

By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

I have more to say about your counterexample below, but I don't want
this thread to devolve into a critique of 1980s classic AI models.

The main purpose I posted to this thread was that I was seeing
inaccurate conclusions being drawn based on a lack of understanding
of how the terms “backward” and “forward” chaining relate to temporal
dependencies and hierarchical logic constructs.  There is no relation.
Using forward chaining has nothing to do with “forward in time” or
“down a level in the hierarchy.”  Nor does backward chaining have
anything to do with “backward in time” or “up a level in the hierarchy.”
These terms describe particular search algorithms used in expert system
engines (since, at least, the mid-1980s).  Definitions vary in emphasis,
such as the three someone posted to this thread, but they all refer to
the same critters.

If one wishes to express temporal dependencies or hierarchical levels of
logic in these types of systems, one needs to encode these in the rules.
I believe I even gave an example of a rule base containing temporal and
hierarchical-conditioned rules.
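
For reference, here is a toy sketch of the two evaluation loops as they are
conventionally implemented (the (antecedents, consequent) rule encoding is
hypothetical, not from any particular shell):

# Toy if/then rule evaluation.  Rules are (antecedents, consequent) pairs;
# this encoding is illustrative only.
RULES = [
    (('man',), 'mortal'),
    (('mortal', 'greek'), 'famous'),
]

def forward_chain(facts, rules):
    # Data-driven: fire every rule whose "if" part holds, adding its
    # "then" part to the output, until nothing new is derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for ante, cons in rules:
            if cons not in derived and all(a in derived for a in ante):
                derived.add(cons)
                changed = True
    return derived

def backward_chain(goal, facts, rules):
    # Goal-driven: a goal succeeds if it is a known fact, or if some rule
    # concludes it and all of that rule's "if" parts succeed recursively.
    # (No cycle detection; toy code only.)
    if goal in facts:
        return True
    return any(cons == goal and all(backward_chain(a, facts, rules) for a in ante)
               for ante, cons in rules)

print(forward_chain({'man', 'greek'}, RULES))             # adds 'mortal', then 'famous'
print(backward_chain('famous', {'man', 'greek'}, RULES))  # True

Note that this backward chainer only queries the fact base; it never
asserts the subgoals it explores, which is exactly the point at issue in
the exchange below.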

So for example, if your goal is to prove that Socrates is mortal, then 
your above description of BWC would cause the following to occur:


1) Does any rule allow us to conclude that x is/is not mortal?

2) Answer: yes, the following rules allow us to do that:

"If x is a plant, then x is mortal"
"If x is a rock, then x is not mortal"
"If x is a robot, then x is not mortal"
"If x lives in a post-singularity era, then x is not mortal"
"If x is a slug, then x is mortal"
"If x is a japanese beetle, then x is mortal"
"If x is a side of beef, then x is mortal"
"If x is a screwdriver, then x is not mortal"
"If x is a god, then x is not mortal"
"If x is a living creature, then x is mortal"
"If x is a goat, then x is mortal"
"If x is a parrot in a Dead Parrot Sketch, then x is mortal"

3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock, 
etc., etc., working through the above list.


4) [According to your version of BWC, if I understand you aright] Okay, 
if we cannot find any facts in the KB that say that Socrates is known to 
be one of these things, then add the first of these to the KB:


"Socrates is a plant"

[This is the bit that I question:  we don't do the opposite of forward 
chaining at this step].


5) Now repeat to find all rules that allow us to conclude that "x is a 
plant".  For this set of "... then x is a plant" rules, go back and 
repeat the loop from step 2 onwards.  Then if this does not work, 
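
Running this example through the toy backward chainer sketched earlier
(with a hypothetical encoding of a few of these rules, plus an assumed
background rule and fact) shows the conventional behavior: failed subgoals
such as "Socrates is a plant" simply fail, and nothing is ever asserted
into the knowledge base.

def backward_chain(goal, facts, rules):   # same toy chainer as sketched above
    if goal in facts:
        return True
    return any(cons == goal and all(backward_chain(a, facts, rules) for a in ante)
               for ante, cons in rules)

# A few of the rules above, in the toy (antecedents, consequent) encoding.
SOCRATES_RULES = [
    (('plant',), 'mortal'),
    (('rock',), 'not mortal'),
    (('living creature',), 'mortal'),
    (('man',), 'living creature'),   # assumed background rule for the example
]
FACTS = {'man'}                      # assumed ground fact: Socrates is a man

# 'plant' has no support, so that branch fails without asserting anything;
# 'living creature' succeeds via 'man'.
print(backward_chain('mortal', FACTS, SOCRATES_RULES))   # True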



Well, yo