> ##### ED PORTER'S CURRENT RESPONSE #####
> Forward and backward chaining are not hacks.  They have been two of the most
> commonly used and often successful techniques in AI search for at least 30
> years.  They are not some sort of wave of the hand.  They are much more
> concretely grounded in successful AI experience than many of your far more
> ethereal, and very arguably hand-waving, statements about how many of the
> difficult problems in AI are to be cured by some as yet unclearly defined
> emergence from complexity.

Richard Loosemore's response:
Oh dear:  yet again I have to turn a blind eye to the ad hominem insults.
----------------------------------------------
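
For readers following along: forward chaining is nothing exotic.  It is the 
textbook technique of repeatedly applying IF-THEN rules whose premises are 
already in the fact base until nothing new can be derived (backward chaining 
simply works from the goal back toward the facts).  A minimal sketch - the 
rules and facts below are hypothetical, purely for illustration - might look 
like this in Python:

    # Forward chaining: fire any rule whose premises are all known facts,
    # add its conclusion, and repeat until no rule adds anything new.
    rules = [
        ({"bird", "healthy"}, "can_fly"),   # IF bird AND healthy THEN can_fly
        ({"can_fly"}, "can_migrate"),       # IF can_fly THEN can_migrate
    ]
    facts = {"bird", "healthy"}

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now also contains 'can_fly' and 'can_migrate'

Production rule systems have been built around essentially this loop (plus 
efficient rule indexing) for decades, which is the track record Ed is pointing 
to.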

There were no ad hominem insults in Ed's response.  His comment about Richard's 
ethereal hand waving was clearly and unmistakably within the boundaries that 
Richard has set in his own criticisms again and again.  And Ed specified the 
target of the criticism when he spoke of the "difficult problems in AI 
...[which]... are to be cured by some as yet unclearly defined emergence from 
complexity."  All Richard had to do was answer the question; instead he ran 
for cover behind this bogus charge of being the victim of an ad hominem insult.

If, upon reflection, Richard sincerely believes that Ed's comment was an ad 
hominem insult, then we can take this comment as a basis for detecting the true 
motivation behind those comments of Richard's which are so similar in form.

For example, Richard said, "Understanding that they only have the status of 
hacks is a very important sign of maturity as an AI researcher.  There is a 
very deep truth buried in that fact."

While I have some partial agreement with Richard's side on this one particular 
statement, I can only conclude, by Richard's own measure of "ad hominem 
insults," that Richard must have intended this remark to have that kind of 
effect.  Similarly, I feel comfortable with the conclusion that every time 
Richard uses his "hand waving" argument, there is a good chance that he is 
just using it as an all-purpose ad hominem insult.

It is too bad that Richard cannot discuss his complexity theory without running 
from the fact that his solution to the problem rests on this non-explanation: 
"...in this "emergent" (or, to be precise, "complex system") answer to 
the question, there is no guarantee that binding will happen.  The 
binding problem in effect disappears - it does not need to be explicitly 
solved because it simply never arises.  There is no specific mechanism 
designed to construct bindings (although there are lots of small 
mechanisms that enforce constraints), there is only a general style of 
computation, which is the relaxation-of-constraints style."
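
To make the quoted passage concrete: the nearest standard example of a 
relaxation-of-constraints style of computation is something like min-conflicts 
search, where nothing ever constructs a global solution explicitly - many small 
local constraints are simply enforced over and over until the system settles.  
The map-colouring toy below is my own illustration of that style, not anything 
Richard has actually specified:

    import random

    # Toy constraint network: four regions; neighbours must differ in colour.
    neighbours = {
        "A": ["B", "C"],
        "B": ["A", "C", "D"],
        "C": ["A", "B", "D"],
        "D": ["B", "C"],
    }
    colours = ["red", "green", "blue"]

    # Start from a random (probably inconsistent) assignment.
    assignment = {v: random.choice(colours) for v in neighbours}

    def conflicts(var, colour):
        return sum(assignment[n] == colour for n in neighbours[var])

    for _ in range(1000):
        violated = [v for v in neighbours if conflicts(v, assignment[v]) > 0]
        if not violated:
            break                        # every local constraint is satisfied
        v = random.choice(violated)      # relax one violated spot at a time
        assignment[v] = min(colours, key=lambda c: conflicts(v, c))

    print(assignment)

Note that, exactly as the quoted passage says, there is no guarantee this 
settles, and no mechanism that "solves" the problem as such; the open question 
is whether anything AGI-sized can be expected to fall out of that style alone.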

From reading Richard's postings, I think that Richard does not believe there is 
a problem, because the nature of complexity itself will solve the problem - 
once someone is lucky enough to find the right combination of initial rules.

For those who believe that problems are solved through study and 
experimentation, Richard has no response to the most difficult problems in 
contemporary AI research except to cry foul.  He does not even consider such 
questions to be valid.

Jim Bromer
