>         
>                 An Update....
>                 
>                 I think the following gets to the heart of general AI
>                 and what it takes to achieve it. It also provides us
>                 with evidence as to why general AI is so difficult.
>                 With this new knowledge in mind, I think I am now
>                 much more capable of solving the problems and making
>                 it work.
>                 
>                 I've recently come to the conclusion that the best
>                 hypothesis is the one that is more predictive and
>                 then simpler than the other hypotheses (in that
>                 order: more predictive first, then simpler). But I
>                 am amazed at how difficult it is to quantitatively
>                 define "more predictive" and "simpler" for specific
>                 problems. This is why I have sometimes doubted the
>                 truth of the statement.

Hi,
I disagree. It's a balance. Sometimes simpler is better, sometimes "more
predictive" is better. Simpler can be better because of the decrease in
computation time; sometimes you want to solve things quickly.
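
To make the two positions concrete, here is a minimal sketch in
Python (the hypotheses, accuracies, and complexity numbers below are
invented purely for illustration). It shows the lexicographic "more
predictive, then simpler" rule from the text above next to a weighted
balance of the two, in the spirit of penalized model selection:

# Two ways of ranking candidate hypotheses. "accuracy" stands in for
# "predictiveness" and "complexity" (e.g. description length) stands
# in for "simplicity"; all numbers are made up.
candidates = [
    # (name, predictive accuracy on held-out data, complexity)
    ("h1", 0.92, 35),
    ("h2", 0.92, 12),
    ("h3", 0.88, 4),
]

# 1. Lexicographic rule: most predictive first, and only break ties
#    by preferring the simpler hypothesis.
best_lex = max(candidates, key=lambda h: (h[1], -h[2]))

# 2. A balance: trade accuracy against complexity with a weight.
LAMBDA = 0.01  # arbitrary illustration value
best_bal = max(candidates, key=lambda h: h[1] - LAMBDA * h[2])

print(best_lex[0])  # h2: ties h1 on accuracy, so the simpler one wins
print(best_bal[0])  # h3: a little less accurate but much simpler

The two rules disagree here on purpose; which one is "right" depends
on what the extra accuracy is worth to you, which is exactly the
balance I mean.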
>                 
>                 In addition, the observations that the AI gets are
>                 not representative of all possible observations!
>                 This means that if your measure of "predictiveness"
>                 depends on counts of particular observations, it can
>                 make mistakes! The specific observations you happen
>                 to have may misrepresent how predictive a hypothesis
>                 is relative to the truth. If you try to calculate
>                 which hypothesis is more predictive but you lack the
>                 critical observations that would give you the right
>                 answer, you may get the wrong answer! This all
>                 depends, of course, on your method of calculation,
>                 which is quite elusive to define.
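
As a small illustration of that point (the toy rule and the
artificially biased sample below are invented, not from the message
above): two hypotheses can look equally predictive on the
observations you happen to have, while one of them is badly wrong on
the observations you are missing.

# Ground truth: y = 1 exactly when x >= 60.
population = [(x, int(x >= 60)) for x in range(100)]

hyp_a = lambda x: int(x >= 60)  # the correct rule
hyp_b = lambda x: int(x >= 20)  # wrong for 20 <= x < 60, right elsewhere

def accuracy(hyp, data):
    return sum(hyp(x) == y for x, y in data) / len(data)

# An unrepresentative sample: we only ever observed x < 20 or
# x >= 60, i.e. the "critical observations" in between are missing.
biased = [(x, y) for x, y in population if x < 20 or x >= 60]

print(accuracy(hyp_a, biased), accuracy(hyp_b, biased))          # 1.0 1.0
print(accuracy(hyp_a, population), accuracy(hyp_b, population))  # 1.0 0.6

On the biased sample the two hypotheses are indistinguishable; on the
full population only one of them survives.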
>                 
>                 Visual input from screenshots, for example, can be
>                 somewhat malicious. Things can move, appear,
>                 disappear, or occlude each other suddenly. So,
>                 without sufficient knowledge it is hard to decide
>                 whether the matches you find across such large
>                 changes correspond to the same object or to a
>                 different one. This may indicate that bias and
>                 preprogrammed experience should be introduced to the
>                 AI before training. Either that, or the training
>                 inputs should be carefully chosen to avoid malicious
>                 input and to make them amenable to learning.
>                 
>                 This is the "correspondence problem" that is typical
>                 of computer vision and has never been properly solved.
>                 Such malicious input also makes it difficult to learn
>                 automatically because the AI doesn't have sufficient
>                 experience to know which changes or transformations
>                 are acceptable and which are not. It is immediately
>                 bombarded with malicious inputs.
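
To make the correspondence problem concrete, here is a minimal,
deliberately naive sketch (the coordinates and the gating threshold
are invented): match each object in one frame to its nearest
neighbour in the next frame, and reject matches beyond a gate. The
gate is exactly the kind of preprogrammed bias mentioned above, and a
sudden large jump is indistinguishable from a disappearance.

import math

frame_t  = {"A": (10, 10), "B": (50, 50)}
frame_t1 = {"p": (12, 11), "q": (90, 20)}  # did B jump to q, or vanish?

GATE = 15.0  # maximum plausible per-frame movement (prior knowledge)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

matches = {}
for name, pos in frame_t.items():
    cand, cpos = min(frame_t1.items(), key=lambda kv: dist(pos, kv[1]))
    matches[name] = cand if dist(pos, cpos) <= GATE else None  # None = lost

print(matches)  # {'A': 'p', 'B': None}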
>                 
>                 I've also realized that if a hypothesis is more
>                 "explanatory", it may be better. But quantitatively
>                 defining "explanatory" is also elusive and truly
>                 depends on the specific problems you are applying it
>                 to, because it is a heuristic. It is not a true
>                 measure of correctness; it is not loyal to the
>                 truth. "More explanatory" is really a heuristic that
>                 helps us find hypotheses that are more predictive.
>                 The true measure of whether a hypothesis is better
>                 is simply whether it is more accurate and more
>                 predictive. That is the ultimate and true measure of
>                 correctness.
>                 
>                 Also, since we can't measure every possible
>                 prediction (and we certainly can't predict
>                 everything), our measure of predictiveness can't
>                 possibly be right all the time! We have no choice
>                 but to use a heuristic of some kind.
>                 
>                 So, it's clear to me that the right hypothesis is
>                 "more predictive and then simpler". But it is also
>                 clear that there will never be a single measure of
>                 this that can be applied to all problems. I hope to
>                 eventually find a nice model for how to apply it to
>                 different problems, though. This may be the reason
>                 that so many people have tried and failed to develop
>                 general AI. Yes, there is a solution, but there is
>                 no silver bullet that can be applied to all
>                 problems. Some methods are better than others. But I
>                 think another major reason for the failures is that
>                 people think they can predict things without
>                 sufficient information. By approaching the problem
>                 this way, we compound the need for heuristics and
>                 the errors they produce, because we simply don't
>                 have enough evidence to make a good decision. If
>                 approached correctly, the right solution would solve
>                 many more problems with the same effort than a poor
>                 solution would. It would also eliminate some of the
>                 difficulties we currently face, provided sufficient
>                 data is available to learn from.
>                 
>                 In addition to all this theory about better
>                 hypotheses, you have to add the need to solve
>                 problems in reasonable time. This further compounds
>                 the difficulty of the problem and the complexity of
>                 solutions.
>                 
>                 I am always fascinated by the extraordinary difficulty
>                 and complexity of this problem. The more I learn about
>                 it, the more I appreciate it.
>                 
>                 Dave
> -- 
> Abram Demski
> http://lo-tho.blogspot.com/
> http://groups.google.com/group/one-logic