Thomas Hruska wrote:
> I don't have any examples.  A.I. is a bogus term which basically means 
> "wing it until you have something that passes for reality".  People 
> typically think "neural networks" when it comes to A.I. but neural 
> networks are simply statistical engines that have been trained on a 
> limited set of data and are only semi-useful - the more inputs into a 
> neural net, the less useful it becomes.
>   

I have played around with neural networks and attempted to design 
systems that could make decisions on their own, not just strict if/else 
logic. What I found is this.

1. Neural networks are not good for general A.I. They are good for 
solving very specific problems where the algorithm is not obvious, or 
where no polynomial-time solution is known: a neural network can prune 
the solution space so that, even though the problem may still be 
intractable in the worst case, the "real" running time drops to 
reasonable levels (cheating death, as I think of it, big-O be damned).
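The pruning idea above can be sketched with a beam search: a learned scorer keeps only the k best partial solutions at each step, replacing an exponential search with roughly k * branching * depth evaluations. In this sketch the scorer is a stand-in heuristic, not an actual trained network, and the moves and target are made up for illustration.

```python
# Sketch of pruning a solution space with a learned scorer (beam search).
# The score() function stands in for a trained model's output; it is an
# assumption for illustration, not anything from the original post.

def score(path):
    # Stand-in heuristic: prefer move sequences whose values sum near 10.
    return -abs(sum(path) - 10)

def pruned_search(moves, depth, k=3):
    """Keep only the k best-scoring partial paths at each level, so the
    work grows linearly with depth instead of exponentially."""
    frontier = [()]
    for _ in range(depth):
        candidates = [path + (m,) for path in frontier for m in moves]
        frontier = sorted(candidates, key=score, reverse=True)[:k]
    return frontier[0]

best = pruned_search(moves=[1, 2, 3, 4], depth=4, k=3)
print(best, sum(best))
```

Because the scorer can discard the true optimum early, the answer is approximate; that is the trade the paragraph describes, giving up a guarantee in exchange for tractable time.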

2. A.I. is extremely difficult, and for those of us who are not Ph.D.s 
working at research universities, it is generally only feasible if we 
simplify the decisions the computer can make. In your previous example, 
if you were programming a basketball game, you would have a finite set 
of actions such as shoot, pass, move, rebound, etc., and even then 
"shoot" might mean a 3 pointer, a layup, a dunk, etc. Typically there is 
some sort of probability involved: the computer gauges the probability 
of succeeding at each action and tosses a loaded die, with the safer 
options being more likely.
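The loaded-die selection described above might look like this. The action names and success probabilities are hypothetical numbers for illustration, not anything from the original post.

```python
import random

# Hypothetical success probabilities for each action (illustration only).
actions = {
    "pass": 0.90,
    "layup": 0.55,
    "three_pointer": 0.35,
    "dunk": 0.25,
}

def choose_action(actions, rng=random):
    """Toss a 'loaded die': each action's chance of being picked is
    proportional to its estimated probability of success, so safer
    options come up more often."""
    total = sum(actions.values())
    roll = rng.uniform(0, total)
    cumulative = 0.0
    for action, weight in actions.items():
        cumulative += weight
        if roll <= cumulative:
            return action
    return action  # guard against floating-point edge cases

print(choose_action(actions))
```

The cumulative-sum loop is written out to show the mechanics; in practice `random.choices(list(actions), weights=actions.values())` does the same weighted draw in one call.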

While that example may not truly be A.I., it is programming autonomous 
decision logic into the computer. Instead of hard-coding the actions, it 
lets the computer weigh the risk factors and decide on its own which 
action to take. That approach seems to work well so far, but only for 
relatively simple problems.

-- 
John Gaughan
http://www.jtgprogramming.org/
