Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-11 Thread David Jones
Thanks Abram,

I know that probability is one approach, but there are many problems with
using it in actual implementations. I know that statement will anger a lot of
people, who will retort with all the successes they have had using
probability. The truth, though, is that these problems can be solved in many
ways, and every way has its pros and cons. I personally believe that
probability, used all by itself, has unacceptable cons. It should be used only
when it is the best tool for the task.

I do plan to use some probability within my approach, but only when it makes
sense to do so. I do not believe in purely statistical solutions or in purely
Bayesian machine learning on its own.

A good example of when I might use it: if a particular hypothesis predicts
something with 70% accuracy, it may still be better than any other hypothesis
we have come up with so far, so we may adopt it. But the 30% of cases it gets
wrong should be explained, wherever the available resources and algorithms
allow it. This is where my method differs from statistical methods: I want to
build algorithms that resolve that 30% and explain it. For many problems,
there are rules and knowledge that will solve them effectively; probability
should be used only when you cannot find a more accurate solution.
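Here is a minimal sketch of that selection-then-explanation loop (illustrative
only; the Hypothesis class, the rule format, and the case format are
placeholders, not a committed design):

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Hypothesis:
    name: str
    predict: Callable[[Dict], bool]  # predicts the observation for a case

def accuracy(h: Hypothesis, cases: List[Dict]) -> float:
    # Fraction of cases where the hypothesis matches what was observed.
    return sum(h.predict(c) == c["observed"] for c in cases) / len(cases)

def unexplained_errors(h: Hypothesis, cases: List[Dict],
                       rules: List[Callable[[Dict], bool]]) -> List[Dict]:
    # Error cases that no available rule accounts for; only these are
    # left to purely probabilistic treatment.
    errors = [c for c in cases if h.predict(c) != c["observed"]]
    return [c for c in errors if not any(rule(c) for rule in rules)]

def choose_hypothesis(hypotheses, cases, rules):
    # Adopt the most accurate hypothesis (e.g. the 70% one), then try to
    # explain its residual errors with explicit rules.
    best = max(hypotheses, key=lambda h: accuracy(h, cases))
    return best, unexplained_errors(best, cases, rules)

The point of the sketch is only the ordering: explanation is tried first, and
probability absorbs only the residue that no rule explains.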

Basically, we should use probability when we don't know the factors involved,
can't find any rules to explain the phenomena, or don't have the time and
resources to figure them out. In those cases you must simply guess at the most
probable event, because you have no rules for deciding which event applies
under the current circumstances.

So, in summary, probability definitely has its place. I just think that
explanatory reasoning and other more accurate methods should be preferred
whenever possible.

Regarding learning the knowledge being the bigger problem, I completely
agree. That is why I think it is so important to develop machine learning
that can learn by direct observation of the environment. Without that, it is
practically impossible to gather the knowledge required for AGI-type
applications. We can learn this knowledge by analyzing the world
automatically and generally through video.

My step-by-step approach for learning, and then applying, the knowledge for
AGI is as follows (a skeleton sketch follows the list):
1) Understand and learn about the environment (through computer vision for
now, and other sensory perceptions in the future)
2) Learn about your own actions and how they affect the environment
3) Learn about language and how it is associated with, or related to, the
environment
4) Learn goals from language (such as through dedicated inputs)
5) Pursue goals
6) Add other miscellaneous capabilities as needed
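As a purely illustrative skeleton of those stages (every class and method name
here is a placeholder, not a committed design):

class AGIPipeline:
    """Hypothetical skeleton of the staged roadmap above."""

    def __init__(self):
        self.world_model = {}  # knowledge accumulated by observation

    def observe_environment(self, video_frames):
        """Stage 1: learn about the environment through vision."""

    def learn_own_actions(self, action_log):
        """Stage 2: learn how the agent's actions affect the environment."""

    def ground_language(self, utterances):
        """Stage 3: associate language with the modeled environment."""

    def acquire_goals(self, goal_inputs):
        """Stage 4: learn goals expressed in language."""

    def pursue_goals(self):
        """Stage 5: plan and act toward the acquired goals."""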

Dave

On Sat, Jul 10, 2010 at 8:40 PM, Abram Demski  wrote:

> David,
>
> Sorry for the slow response.
>
> I agree completely about expectations vs predictions, though I wouldn't use
> that terminology to make the distinction (since the two terms are
> near-synonyms in English, and I'm not aware of any technical definitions
> that are common in the literature). This is why I think probability theory
> is necessary: to formalize this idea of expectations.
>
> I also agree that it's good to utilize previous knowledge. However, I think
> existing AI research has tackled this over and over; learning that knowledge
> is the bigger problem.
>
> --Abram
>





[agi] Mechanical Analogy for Neural Operation!

2010-07-11 Thread Steve Richfield
Everyone has heard about the water analogy for electrical operation. I have
a mechanical analogy for neural operation that just might be "solid" enough
to compute at least some characteristics optimally.

No, I am NOT proposing building mechanical contraptions, just using the
concept to compute neuronal characteristics (or AGI formulas for learning).

Suppose neurons were mechanical contraptions that receive inputs and
communicate outputs via mechanical movements. If a neuron connected to another
neuron's output can't reconcile a given input with its other inputs, its
mechanism would jam and thereby physically resist the several inputs that
don't make mutual sense, with the resistance possibly coming from some neuron
further downstream.

This would use position to resolve opposing forces, e.g. one "force" being the
observed inputs, and the opposing "force" being that they don't make sense,
suggest some painful outcome, etc. In short, it would enforce the sort of
equation over the present formulaic view of neurons (and AGI coding) that I
have suggested in past postings may be present, and it would show that the
math may not be all that challenging.

Uncertainty would be expressed as stiffness or flexibility, computed
limitations would be handled with over-running clutches, and so on.

Propagation of forces would come close (perhaps even perfectly so) to
identifying just where in a complex network something should change in order
to learn as efficiently as possible.

Once the force concentrates at some point, that point "gives": something slips
or bends to unjam the mechanism. Thus, learning is effected.
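To see that the math really may be tractable, here is a toy numerical sketch
of the idea (illustrative only; the Spring class and the slip rule are just
one possible reading of the analogy, not a definitive design): activations are
positions, constraints are springs whose stiffness encodes certainty, the
network relaxes until forces balance, and the most stressed spring "gives".

class Spring:
    def __init__(self, i, j, rest, stiffness):
        self.i, self.j = i, j        # indices of the connected units
        self.rest = rest             # learned expected difference
        self.stiffness = stiffness   # high stiffness = high certainty

    def force(self, x):
        # Hooke's law on the deviation from the learned expectation.
        return self.stiffness * ((x[self.j] - x[self.i]) - self.rest)

def relax(x, springs, clamped, steps=200, rate=0.05):
    """Let unclamped units move until the opposing forces balance."""
    for _ in range(steps):
        net = [0.0] * len(x)
        for s in springs:
            f = s.force(x)
            net[s.i] += f   # restoring force drives the units' difference
            net[s.j] -= f   # back toward the spring's rest value
        for k in range(len(x)):
            if k not in clamped:   # clamped units are the observed inputs
                x[k] += rate * net[k]
    return x

def slip_most_stressed(x, springs, give=0.5):
    """Learning: the spring under the most force 'gives', moving its rest
    length toward the observed configuration to unjam the mechanism."""
    worst = max(springs, key=lambda s: abs(s.force(x)))
    worst.rest += give * ((x[worst.j] - x[worst.i]) - worst.rest)
    return worst

Note that relax propagates force symmetrically, which matches the next point:
the distinction between forward and backward propagation is one of
engineering, not of math.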

Note that this suggests little difference between forward and backward
propagation, though real-world wet design considerations would clearly favor
fast mechanisms for forward propagation and compact mechanisms for backward
propagation.

Epiphany or mania?

Any thoughts?

Steve


