[agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread David Jones
I've learned something really interesting today: I realized that general
rules of inference probably don't really exist. There is no such thing as
complete generality for these problems. Rules of inference that work for
one environment would fail in alien environments.

So, I have to modify my approach to solving these problems. As I studied
over-simplified problems, I realized that there are probably an infinite
number of environments, each with its own behaviors, that are not
representative of the environments we want to put a general AI in.

So, it is not OK to just come up with any case study and solve it. The case
study has to actually be representative of a problem we want to solve in an
environment where we want to apply AI. Otherwise the solution will take too
long to develop, because it tries to accommodate too much "generality". As I
mentioned, such a fully general solution is likely impossible. So, someone
could easily get stuck on the impossible task of creating one general
solution to too many problems that don't allow a general solution.

The best course is a balance between the time required to write a very
general solution and the time required to write less general solutions for
multiple problem types and environments. The best way to do this is to
choose representative case studies to solve and make sure the solutions are
truth-tropic and justified for the environments in which they are to be
applied.

Dave


On Sun, Jun 27, 2010 at 1:31 AM, David Jones  wrote:

> A method for comparing hypotheses in explanation-based reasoning:
>
> We prefer the hypothesis or explanation that *expects* more
> observations. If both explanations expect the same observations, then the
> simpler of the two is preferred (because the unnecessary terms of the more
> complicated explanation do not add to the predictive power).
>
> *Why are expected events so important?* They are a measure of 1)
> explanatory power and 2) predictive power. The more predictive and the more
> explanatory a hypothesis is, the more likely the hypothesis is when compared
> to a competing hypothesis.
>
> Here are two case studies I've been analyzing from sensory perception of
> simplified visual input:
> The goal of the case studies is to answer the following: How do you
> generate the most likely motion hypothesis in a way that is general and
> applicable to AGI?
> *Case Study 1)* Here is a link to an example: an animated gif of two black
> squares moving from left to right.
> *Description:* Two black squares are moving in unison from left to right
> across a white screen. In each frame the black squares shift to the right so
> that square 1 steals square 2's original position and square 2 moves an
> equal distance to the right.
> *Case Study 2)* Here is a link to an example: the interrupted square.
> *Description:* A single square is moving from left to right. Suddenly in
> the third frame, a single black square is added in the middle of the
> expected path of the original black square. This second square just stays
> there. So, what happened? Did the square moving from left to right keep
> moving? Or did it stop and then another square suddenly appeared and moved
> from left to right?
>
> *Here is a simplified version of how we solve case study 1:*
> The important hypotheses to consider are:
> 1) The square from frame 1 of the video that is very close in position to
> the square from frame 2 should be matched (we hypothesize that they are the
> same square and that any difference in position is motion). So, what
> happens is that in each pair of consecutive frames, we match only one
> square. The other square goes unmatched.
> 2) We do the same thing as in hypothesis #1, but this time we also match
> the remaining squares and hypothesize motion as follows: the first square
> jumps over the second square from left to right. We hypothesize that this
> happens over and over in each frame of the video. Square 2 stops and square
> 1 jumps over it over and over again.
> 3) We hypothesize that both squares move to the right in unison. This is
> the correct hypothesis.
>
> So, why should we prefer the correct hypothesis, #3, over the other two?
>
> Well, first of all, #3 is correct because it has the most explanatory power
> of the three and is the simplest of the three. Simpler is better because,
> with the given evidence and information, there is no reason to desire a more
> complicated hypothesis such as #2.
>
> So, the answer to the question is that explanation #3 expects the most
> observations, such as:
> 1) The consistent relative positions of the squares in each frame are
> expected.
> 2) It also expects their new positions in each frame, based on velocity
> calculations.
> 3) It expects both squares to appear in each frame.
>
> Explanation 1 ignores 1 square from each frame of the video, because it
> can't match it. Hypothesis #1 doe
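
A minimal Python sketch of the comparison rule stated above -- prefer the
hypothesis that expects the most of the actual observations, and break ties
by simplicity -- using hypothetical data structures invented for this
illustration (it is not David's implementation):

# Each hypothesis lists the observations it expects, plus a crude term
# count as a simplicity proxy (illustrative structures only).

def score(hypothesis, observations):
    """Count how many of the actual observations the hypothesis expected."""
    return sum(1 for obs in observations if obs in hypothesis["expected"])

def prefer(hypotheses, observations):
    """Pick the hypothesis expecting the most observations; among equals,
    the one with the fewest terms (the simplest) wins."""
    return max(hypotheses,
               key=lambda h: (score(h, observations), -h["num_terms"]))

# Toy version of case study 1: two squares moving right in unison.
observations = {"sq1_present", "sq2_present", "relative_positions_constant",
                "positions_match_constant_velocity"}

hypotheses = [
    {"name": "match one square, leave the other unmatched",
     "expected": {"sq1_present"}, "num_terms": 2},
    {"name": "square 1 repeatedly jumps over square 2",
     "expected": {"sq1_present", "sq2_present"}, "num_terms": 4},
    {"name": "both squares move right in unison",
     "expected": set(observations), "num_terms": 2},
]

print(prefer(hypotheses, observations)["name"])
# -> both squares move right in unison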

[agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread David Jones
An easy demonstration of this is visual illusions, and even visual mistakes
like the one I sent to this list before. Our eyes sometimes infer things that
are not true. It is absolutely necessary for such mistakes to occur, because
our sensory interpretation system is optimized for the world we expect to
encounter, which didn't include optical illusions during most of our
development. A perfect solution to all visual problems and possible
environments is [likely] impossible. It is OK to fail on optical illusions,
since the failure has no fatal consequences, other than maybe thinking that
there is a water puddle in the middle of the desert :).

Dave


Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread Abram Demski
David,

That's why, imho, the rules need to be *learned* (and, when need be,
unlearned). I.e., what we need to work on is general learning algorithms, not
general visual processing algorithms.

As you say, there's not even such a thing as a general visual processing
algorithm. Learning algorithms suffer from a similar environment-dependence,
but (by their nature) not as severely...

--Abram


Re: [agi] Solomonoff Induction is Not "Universal" and Probability is not "Prediction"

2010-07-08 Thread Abram Demski
Yes, Jim, you seem to be mixing arguments here. I cannot tell which of the
following you intend:

1) Solomonoff induction is useless because it would produce very bad
predictions if we could compute them.
2) Solomonoff induction is useless because we can't compute its predictions.

Are you trying to reject #1 and assert #2, reject #2 and assert #1, or
assert both #1 and #2?

Or some third statement?

--Abram

On Wed, Jul 7, 2010 at 7:09 PM, Matt Mahoney  wrote:

> Who is talking about efficiency? An infinite sequence of uncomputable
> values is still just as uncomputable. I don't disagree that AIXI and
> Solomonoff induction are not computable. But you are also arguing that they
> are wrong.
>
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
> --
> *From:* Jim Bromer 
> *To:* agi 
> *Sent:* Wed, July 7, 2010 6:40:52 PM
> *Subject:* Re: [agi] Solomonoff Induction is Not "Universal" and
> Probability is not "Prediction"
>
> Matt,
> But you are still saying that Solomonoff Induction has to be recomputed for
> each possible combination of bit values, aren't you?  Although this doesn't
> matter when you are dealing with infinite computations in the first place,
> it does matter when you are wondering whether this has anything to do with
> AGI and compression efficiencies.
> Jim Bromer
> On Wed, Jul 7, 2010 at 5:44 PM, Matt Mahoney  wrote:
>
>> Jim Bromer wrote:
>> > But, a more interesting question is, given that the first digits are
>> > 000, what are the chances that the next digit will be 1?  Dim Induction
>> > will report .5, which of course is nonsense and a whole lot less useful
>> > than making a rough guess.
>>
>> Wrong. The probability of a 1 is p(0001)/(p(0000)+p(0001)), where the
>> probabilities are computed using Solomonoff induction. A program that
>> outputs 0000 will be shorter in most languages than a program that outputs
>> 0001, so 0 is the most likely next bit.
>>
>> More generally, probability and prediction are equivalent by the chain
>> rule. Given any 2 strings x followed by y, the prediction p(y|x) =
>> p(xy)/p(x).
>>
>>
>> -- Matt Mahoney, matmaho...@yahoo.com
>>
>>
>>  --
>> *From:* Jim Bromer 
>> *To:* agi 
>> *Sent:* Wed, July 7, 2010 10:10:37 AM
>> *Subject:* [agi] Solomonoff Induction is Not "Universal" and Probability
>> is not "Prediction"
>>
>> Suppose you have sets of "programs" that produce two strings.  One set of
>> outputs is 000...0 and the other is 111...1. Now suppose you used these sets
>> of programs to chart the probabilities of the output of the strings.  If the
>> two strings were each output by the same number of programs, then you'd have
>> a .5 probability that either string would be output.  That's ok.  But a
>> more interesting question is: given that the first digits are 000, what are
>> the chances that the next digit will be 1?  Dim Induction will report .5,
>> which of course is nonsense and a whole lot less useful than making a rough
>> guess.
>>
>> But, of course, Solomonoff Induction purports to be able, if it were
>> feasible, to compute the possibilities for all possible programs.  Ok, but
>> now, try thinking about this a little bit.  If you have ever tried writing
>> random program instructions, what do you usually get?  Well, I'll hazard a
>> guess (a lot better than the bogus method of confusing shallow probability
>> with "prediction" in my example) and say that you will get a lot of
>> programs that crash.  Well, most of my experiments with that have ended up
>> with programs that go into an infinite loop or which crash.  Now on a
>> universal Turing machine, the results would probably look a little
>> different.  Some programs will output nothing and go into an infinite loop.
>> Some programs will output something and then either stop outputting
>> anything or start repeating the same substring in an infinite loop.  Other
>> programs will go on to infinity producing something that looks like random
>> strings.  But the idea that all possible programs would produce
>> well-distributed strings is complete hogwash.  Since Solomonoff Induction
>> does not define what kind of programs should be used, the assumption that
>> the distribution would produce useful data is absurd.  In particular, the
>> use of the method to determine the probability given an initial string (as
>> in what follows given that the first digits are 000) is wrong, as in really
>> wrong.  The idea that this crude probability can be used as "prediction"
>> is unsophisticated.
>>
>> Of course, you could develop an infinite set of Solomonoff Induction values
>> for each possible given initial sequence of digits.  Hey, when you're
>> working with infeasible functions, why not dream up anything?
>>
>> I might be wrong of course.  Maybe there is something you guys
>> haven't been able to get across to me.  Even if you can think for yourself
>> you can still make mistakes.  So if anyone has actually tried writing a
>> program to output all possible prog
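
To make the chain rule above concrete: a toy Python sketch (my own
illustration, with a made-up handful of "programs" and lengths; real
Solomonoff induction sums over all programs of a universal machine and is
not computable). Each program is weighted by 2^-length, and
p(next bit | prefix) = m(prefix + bit) / m(prefix).

# (output the program generates, program length in bits) -- made-up values.
programs = [
    ("00000000", 3),   # e.g. "print 0 forever" -- short
    ("11111111", 3),   # "print 1 forever" -- also short
    ("00010001", 7),   # something more contrived -- longer
    ("01010101", 5),
]

def m(x):
    """Total weight (2^-length) of listed programs whose output starts with x."""
    return sum(2.0 ** -length for out, length in programs if out.startswith(x))

def p_next(x, bit):
    """p(next bit | x) = m(x + bit) / m(x), per the chain rule."""
    return m(x + bit) / m(x)

print(p_next("000", "1"))   # small: the short all-zeros program dominates
print(p_next("000", "0"))   # close to 1

Under this toy prior the answer to Jim's question is far from .5, which is
Matt's point; whether that makes the idea useful in practice is exactly what
the thread is arguing about.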

Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread David Jones
It may not be possible to create a learning algorithm that can learn how to
process images in a general way, or to solve other general AGI problems. This
is for the same reason that completely general vision algorithms are likely
impossible. I think that figuring out how to process sensory information
intelligently requires either 1) impossible amounts of processing or 2)
intelligent design and understanding by us.

Maybe you could be more specific about how general learning algorithms would
solve problems such as the one I'm tackling. But I am extremely doubtful it
can be done, because the problems cannot be effectively described to such an
algorithm. If you can't describe the problem, it can't search for solutions.
If it can't search for solutions, you're basically stuck with evolution-type
algorithms, which require prohibitive amounts of processing.

The reason that vision is so important for learning is that sensory
perception is the foundation required to learn everything else. If you don't
start with a foundational problem like this, you won't be representing the
real nature of general intelligence problems, which require extensive
knowledge of the world to solve properly. Sensory perception is required to
learn the information needed to understand everything else. Text and
language, for example, require extensive knowledge about the world to
understand, and especially to learn about. If you start with general
learning algorithms on these unrepresentative problems, you will get stuck,
as we already have.

So, it still makes a lot of sense to start with a concrete problem that does
not require extensive amounts of previous knowledge to start learning. In
fact, AGI requires that you not pre-program the AI with such extensive
knowledge. So, lots of people are working on "general" learning algorithms
that are unrepresentative of what is required for AGI because the algorithms
don't have the knowledge needed to learn what they are trying to learn
about. Regardless of how you look at it, my approach is definitely the right
approach to AGI in my opinion.




Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread Abram Demski
David,

How I'd present the problem is "predict the next frame," or, more
generally, predict a specified portion of video given a different portion.
Do you object to this approach?

--Abram
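
For what it's worth, here is one minimal way that framing could be scored
(my own sketch, with invented names; not a proposal made in the thread): a
predictor is any function from the frames seen so far to a guessed next
frame, and it is evaluated by average per-pixel error on held-out video.

def pixel_error(predicted, actual):
    """Fraction of pixels that differ between two equally sized frames."""
    total = sum(len(row) for row in actual)
    wrong = sum(p != a
                for prow, arow in zip(predicted, actual)
                for p, a in zip(prow, arow))
    return wrong / total

def evaluate(predictor, video):
    """Average next-frame error of `predictor`, which maps the frames
    seen so far to a predicted next frame."""
    errors = [pixel_error(predictor(video[:t]), video[t])
              for t in range(1, len(video))]
    return sum(errors) / len(errors)

# Trivial "persistence" baseline: predict that nothing changes.
persistence = lambda history: history[-1]

# A one-pixel "square" moving right, one step per frame.
video = [[[1, 0, 0]], [[0, 1, 0]], [[0, 0, 1]]]
print(evaluate(persistence, video))   # 2 of 3 pixels wrong per step -> ~0.67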


Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread David Jones
Abram,

Yeah, I would have to object, for a couple of reasons.

First, prediction requires previous knowledge. So, even if you make that
your primary goal, you still have my research goals as a prerequisite:
to process visual information in a more general way and to learn about the
environment in a more general way.

Second, not everything is predictable, and certainly we should not try to
predict everything. Only after we have experience can we actually predict
anything. Even then, it's not precise prediction, like predicting the next
frame of a video. It's more like having knowledge of what is quite likely to
occur, or maybe an approximate prediction, but not guaranteed in the least.
For example, based on previous experience, striking a match will light it.
But sometimes it doesn't light, and that, too, is expected to happen now and
then. We definitely don't predict the next image we'll see when it lights,
though. We just have expectations for what we might see, and this helps us
interpret the image effectively. We should try to "expect" certain outcomes
or possible outcomes, though. You could call that prediction, but it's not
quite the same. The explanations we are more likely to see should be tried
first and preferred unless we are given a reason to think otherwise.


Dave
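
A hedged Python sketch of that weaker notion -- order candidate explanations
by how strongly we expect them, and keep the first one the current evidence
does not contradict -- with names invented purely for illustration (this is
not David's code):

def interpret(candidates, evidence):
    """candidates: list of (explanation, prior_expectation, is_consistent);
    returns the most expected explanation that fits the evidence."""
    for explanation, _, consistent in sorted(candidates,
                                             key=lambda c: -c[1]):
        if consistent(evidence):
            return explanation
    return "no expected explanation fits; fall back to wider search"

candidates = [
    ("struck match lit", 0.9, lambda e: e.get("flame", False)),
    ("struck match failed to light", 0.1, lambda e: not e.get("flame", False)),
]
print(interpret(candidates, {"flame": False}))  # -> struck match failed to light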



Re: [agi] Re: Huge Progress on the Core of AGI

2010-07-08 Thread Mike Tintner
Isn't the first problem simply to differentiate the objects in a scene?
(Maybe the most important movement to begin with is not the movement of the
object, but of the viewer changing their POV, if only slightly - which won't
be a factor if you're "looking" at a screen.)

And that, I presume, comes down to being able to put a crude, highly
tentative, and fluid outline round them (something that won't be necessary
if you're dealing with squares?), while knowing very little, if anything,
about what kind of objects they are - as an infant most likely does. (See
infants' drawings and how they evolve very gradually from a very crude
outline blob that at first can represent anything - that, I'm suggesting, is
a "replay" of how visual perception developed.)

The fluid outline or image schema is arguably the basis of all intelligence -
just about everything in AGI is based on it.  You need an outline, for
instance, not just of objects, but of where you're going and what you're
going to try to do - if you want to survive in the real world.  Schemas
connect everything in AGI.

And it's not a matter of choice - first you have to have an outline/sense of
the whole, whatever it is, before you can start filling in the parts.

P.S. It would be mind-blowingly foolish, BTW, to think you can do better than
the way an infant learns to see - that's an awfully big visual section of the
brain there, and it works.
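
One concrete (and heavily hedged) reading of "put a crude, tentative outline
round them": even a simple flood fill over a binary frame yields rough object
regions whose borders can serve as first-pass outlines. A toy Python sketch,
invented for illustration and nothing like an infant's visual system:

def label_regions(image):
    """image: 2-D list of 0/1 pixels; returns a same-shaped grid of region
    ids, found by flood-filling 4-connected foreground pixels."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and labels[r][c] == 0:
                next_id += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and image[y][x] == 1 and labels[y][x] == 0):
                        labels[y][x] = next_id
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return labels

frame = [[0, 1, 1, 0, 0],
         [0, 1, 1, 0, 1],
         [0, 0, 0, 0, 1]]
print(label_regions(frame))   # two crude "blobs", each a candidate object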



Re: [agi] New KurzweilAI.net site... with my silly article & sillier chatbot ;-p ;) ....

2010-07-08 Thread Mike Archbold
The concept of "citizen science" sounds great, Ben -- especially in
this age.  From my own perspective, I feel like my ideas are good, but
they always fall short of the rigor of a proper scientist, so I don't
have that pretense.  The internet obviously helps out a lot.  The
plight of the solitary laborer is better than it used to be, I think,
due to the availability of information/research.

Mike Archbold

On Mon, Jul 5, 2010 at 8:52 PM, Ben Goertzel  wrote:
> Check out my article on the H+ Summit
>
> http://www.kurzweilai.net/h-summit-harvard-the-rise-of-the-citizen-scientist
>
> and also the Ramona4 chatbot that Novamente LLC built for Ray Kurzweil
> a while back
>
> http://www.kurzweilai.net/ramona4/ramona.html
>
> It's not AGI at all; but it's pretty funny ;-)
>
> -- Ben
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> CTO, Genescient Corp
> Vice Chairman, Humanity+
> Advisor, Singularity University and Singularity Institute
> External Research Professor, Xiamen University, China
> b...@goertzel.org
>
> “When nothing seems to help, I go look at a stonecutter hammering away
> at his rock, perhaps a hundred times without as much as a crack
> showing in it. Yet at the hundred and first blow it will split in two,
> and I know it was not that blow that did it, but all that had gone
> before.”