David,

Sorry for the slow response.

I agree completely about expectations vs predictions, though I wouldn't use
that terminology to make the distinction (since the two terms are
near-synonyms in English, and I'm not aware of any technical definitions
that are common in the literature). This is why I think probability theory
is necessary: to formalize this idea of expectations.
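
To make that concrete, here is a minimal sketch (a toy example of my own in
Python; none of these names come from the literature) of how probability
theory turns "expecting" observations into a comparable score: a hypothesis
expects an observation by assigning it high probability.

    import math

    def log_likelihood(prob_of, observations):
        """Total log-probability a hypothesis assigns to the observations."""
        return sum(math.log(prob_of(obs)) for obs in observations)

    # Two toy hypotheses about a stream of binary events.
    fair = lambda obs: 0.5                           # expects both outcomes equally
    biased = lambda obs: 0.9 if obs == "H" else 0.1  # strongly expects "H"

    data = ["H", "H", "H", "T", "H"]
    print(log_likelihood(fair, data))    # approx. -3.47
    print(log_likelihood(biased, data))  # approx. -2.72: it expected the data more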

I also agree that it's good to utilize previous knowledge. However, I think
existing AI research has tackled this over and over; learning that knowledge
is the bigger problem.

--Abram

On Thu, Jul 8, 2010 at 6:32 PM, David Jones <davidher...@gmail.com> wrote:

> Abram,
>
> Yeah, I would have to object for a couple reasons.
>
> First, prediction requires previous knowledge. So, even if you make that
> your primary goal, you still have my research goals as a prerequisite:
> processing visual information in a more general way and learning about the
> environment in a more general way.
>
> Second, not everything is predictable, and we certainly should not try to
> predict everything. Only after we have experience can we predict anything
> at all. Even then, it's not precise prediction, like predicting the next
> frame of a video. It's more like knowing what is quite likely to occur, or
> maybe an approximate prediction, but nothing guaranteed. For example, based
> on previous experience, striking a match will light it. But sometimes it
> doesn't light, and that outcome too is expected. We certainly don't predict
> the exact next image we'll see when it lights. We just have expectations
> about what we might see, and these help us interpret the image effectively.
> We should still try to "expect" certain outcomes or possible outcomes. You
> could call that prediction, but it's not quite the same. The
> interpretations we are more likely to see should be attempted as
> explanations first and preferred unless we're given a reason to think
> otherwise.
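>
> To illustrate with a toy sketch (my own, with made-up numbers), an
> expectation in this sense is a learned distribution over outcomes, not a
> single guaranteed prediction:
>
>     from collections import Counter
>
>     def expectations_from_experience(outcomes):
>         """Turn past outcomes into probabilities: what we expect to see,
>         with no guarantee about any particular trial."""
>         counts = Counter(outcomes)
>         total = sum(counts.values())
>         return {outcome: n / total for outcome, n in counts.items()}
>
>     history = ["lit"] * 9 + ["didn't light"]   # past match strikes
>     print(expectations_from_experience(history))
>     # {'lit': 0.9, "didn't light": 0.1}: lighting is expected, but a
>     # failure is also expected to occur sometimes.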
>
>
> Dave
>
>
> On Thu, Jul 8, 2010 at 5:51 PM, Abram Demski <abramdem...@gmail.com> wrote:
>
>> David,
>>
>> How I'd present the problem would be "predict the next frame," or more
>> generally predict a specified portion of video given a different portion. Do
>> you object to this approach?
>>
>> --Abram
>>
>> On Thu, Jul 8, 2010 at 5:30 PM, David Jones <davidher...@gmail.com> wrote:
>>
>>> It may not be possible to create a learning algorithm that can learn how
>>> to process images generally or solve other general AGI problems. This is
>>> for the same reason that completely general vision algorithms are likely
>>> impossible. I think that figuring out how to process sensory information
>>> intelligently requires either 1) impossible amounts of processing or 2)
>>> intelligent design and understanding by us.
>>>
>>> Maybe you could be more specific about how general learning algorithms
>>> would solve problems such as the one I'm tackling. But I am extremely
>>> doubtful it can be done, because the problems cannot be effectively
>>> described to such an algorithm. If you can't describe the problem, it
>>> can't search for solutions. If it can't search for solutions, you're
>>> basically stuck with evolution-type algorithms, which require prohibitive
>>> amounts of processing.
>>>
>>> The reason that vision is so important for learning is that sensory
>>> perception is the foundation required to learn everything else. If you
>>> don't start with a foundational problem like this, you won't be
>>> representing the real nature of general intelligence problems, which
>>> require extensive knowledge of the world to solve properly. Text and
>>> language, for example, require extensive knowledge about the world to
>>> understand, and especially to learn about. If you start with general
>>> learning algorithms on these unrepresentative problems, you will get
>>> stuck, as we already have.
>>>
>>> So, it still makes a lot of sense to start with a concrete problem that
>>> does not require extensive previous knowledge before learning can begin.
>>> In fact, AGI requires that you not pre-program the AI with such extensive
>>> knowledge. Yet lots of people are working on "general" learning
>>> algorithms that are unrepresentative of what AGI requires, because the
>>> algorithms don't have the knowledge needed to learn what they are trying
>>> to learn about. Regardless of how you look at it, my approach is, in my
>>> opinion, the right approach to AGI.
>>>
>>>
>>>
>>> On Thu, Jul 8, 2010 at 5:02 PM, Abram Demski <abramdem...@gmail.com> wrote:
>>>
>>>> David,
>>>>
>>>> That's why, imho, the rules need to be *learned* (and, when need be,
>>>> unlearned). I.e., what we need to work on is general learning
>>>> algorithms, not general visual processing algorithms.
>>>>
>>>> As you say, there's not even such a thing as a general visual processing
>>>> algorithm. Learning algorithms suffer from a similar
>>>> environment-dependence, but (by their nature) not as severely...
>>>>
>>>> --Abram
>>>>
>>>> On Thu, Jul 8, 2010 at 3:17 PM, David Jones <davidher...@gmail.com> wrote:
>>>>
>>>>> I've learned something really interesting today. I realized that
>>>>> general rules of inference probably don't really exist. There is no
>>>>> such thing as complete generality for these problems. The rules of
>>>>> inference that work for one environment would fail in alien
>>>>> environments.
>>>>>
>>>>> So, I have to modify my approach to solving these problems. As I
>>>>> studied oversimplified problems, I realized that there are probably an
>>>>> infinite number of environments, each with its own behaviors, that are
>>>>> not representative of the environments we want to put a general AI in.
>>>>>
>>>>> So, it is not ok to just come up with any case study and solve it. The
>>>>> case study has to actually be representative of a problem we want to
>>>>> solve in an environment where we want to apply AI. Otherwise the
>>>>> solution will take too long to develop, because it tries to accommodate
>>>>> too much "generality". As I mentioned, such a general solution is
>>>>> likely impossible. So, someone could easily get stuck on the impossible
>>>>> task of creating one general solution to too many problems that don't
>>>>> allow a general solution.
>>>>>
>>>>> The best course is a balance between the time required to write a very
>>>>> general solution and the time required to write less general solutions
>>>>> for multiple problem types and environments. The best way to do this is
>>>>> to choose representative case studies to solve, and to make sure the
>>>>> solutions are truth-tropic and justified for the environments they are
>>>>> to be applied in.
>>>>>
>>>>> Dave
>>>>>
>>>>>
>>>>> On Sun, Jun 27, 2010 at 1:31 AM, David Jones <davidher...@gmail.com> wrote:
>>>>>
>>>>>> *A method for comparing hypotheses in explanation-based reasoning:*
>>>>>>
>>>>>> We prefer the hypothesis or explanation that *expects* more
>>>>>> observations. If both explanations expect the same observations, then
>>>>>> the simpler of the two is preferred (because the unnecessary terms of
>>>>>> the more complicated explanation do not add to its predictive power).
>>>>>>
>>>>>> *Why are expected events so important?* They are a measure of 1)
>>>>>> explanatory power and 2) predictive power. The more predictive and the
>>>>>> more explanatory a hypothesis is, the more likely it is when compared
>>>>>> to a competing hypothesis.
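>>>>>>
>>>>>> A minimal sketch of this comparison rule (the Hypothesis structure and
>>>>>> the scoring are illustrative assumptions of mine, not a finished
>>>>>> design):
>>>>>>
>>>>>>     from dataclasses import dataclass
>>>>>>     from typing import Callable
>>>>>>
>>>>>>     @dataclass
>>>>>>     class Hypothesis:
>>>>>>         name: str
>>>>>>         expects: Callable[[object], bool]  # expects this observation?
>>>>>>         complexity: int                    # e.g. number of terms used
>>>>>>
>>>>>>     def preferred(hypotheses, observations):
>>>>>>         """Prefer the hypothesis expecting the most observations;
>>>>>>         break ties by preferring the simpler (fewer-term) one."""
>>>>>>         def score(h):
>>>>>>             n = sum(1 for obs in observations if h.expects(obs))
>>>>>>             return (n, -h.complexity)
>>>>>>         return max(hypotheses, key=score)
>>>>>>
>>>>>>     a = Hypothesis("complicated", lambda obs: True, complexity=3)
>>>>>>     b = Hypothesis("simple", lambda obs: True, complexity=1)
>>>>>>     print(preferred([a, b], ["o1", "o2"]).name)  # "simple" wins the tie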
>>>>>>
>>>>>> Here are two case studies I've been analyzing, drawn from sensory
>>>>>> perception of simplified visual input. The goal of the case studies is
>>>>>> to answer the following: how do you generate the most likely motion
>>>>>> hypothesis in a way that is general and applicable to AGI?
>>>>>>
>>>>>> *Case Study 1)* Here is a link to an example: an animated gif of two
>>>>>> black squares moving from left to right
>>>>>> <http://practicalai.org/images/CaseStudy1.gif>.
>>>>>> *Description:* Two black squares are moving in unison from left to
>>>>>> right across a white screen. In each frame the black squares shift to
>>>>>> the right, so that square 1 takes square 2's original position and
>>>>>> square 2 moves an equal distance to the right.
>>>>>> *Case Study 2)* Here is a link to an example: the interrupted square
>>>>>> <http://practicalai.org/images/CaseStudy2.gif>.
>>>>>> *Description:* A single square is moving from left to right. Suddenly,
>>>>>> in the third frame, a second black square is added in the middle of
>>>>>> the expected path of the original square, and it just stays there. So,
>>>>>> what happened? Did the square moving from left to right keep moving?
>>>>>> Or did it stop, while another square suddenly appeared and moved from
>>>>>> left to right?
>>>>>>
>>>>>> *Here is a simplified version of how we solve case study 1:*
>>>>>> The important hypotheses to consider are:
>>>>>> 1) The square from frame 1 of the video whose position is very close
>>>>>> to that of a square from frame 2 should be matched (we hypothesize
>>>>>> that they are the same square and that any difference in position is
>>>>>> motion). So, in each pair of frames of the video, we only match one
>>>>>> square; the other square goes unmatched.
>>>>>> 2) We do the same thing as in hypothesis #1, but this time we also
>>>>>> match the remaining squares and hypothesize motion as follows: the
>>>>>> first square jumps over the second square from left to right. We
>>>>>> hypothesize that this happens over and over in each frame of the
>>>>>> video: square 2 stops and square 1 jumps over it, again and again.
>>>>>> 3) We hypothesize that both squares move to the right in unison. This
>>>>>> is the correct hypothesis.
>>>>>>
>>>>>> So, why should we prefer the correct hypothesis, #3, over the other
>>>>>> two?
>>>>>>
>>>>>> Well, first of all, #3 is correct because it has the most explanatory
>>>>>> power of the three and is also the simplest. Simpler is better
>>>>>> because, given the evidence and information at hand, there is no
>>>>>> reason to prefer a more complicated hypothesis such as #2.
>>>>>>
>>>>>> So, the answer to the question is that explanation #3 expects the
>>>>>> most observations, such as:
>>>>>> 1) It expects the consistent relative positions of the squares in
>>>>>> each frame.
>>>>>> 2) It expects their new positions in each frame, based on velocity
>>>>>> calculations.
>>>>>> 3) It expects both squares to occur in each frame.
>>>>>>
>>>>>> Explanation #1 ignores one square in each frame of the video, because
>>>>>> it can't match it. Hypothesis #1 has no reason for why a new square
>>>>>> appears in each frame and why one disappears; it doesn't expect these
>>>>>> observations. In fact, explanation #1 doesn't expect anything that
>>>>>> happens, because something new happens in each frame, which never
>>>>>> gives it a chance to confirm its hypotheses in subsequent frames.
>>>>>>
>>>>>> The power of this method is immediately clear. It is general and it
>>>>>> solves the problem very cleanly.
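>>>>>>
>>>>>> Here is a toy version of that comparison (reducing each square to an
>>>>>> x-position, with made-up coordinates, is a simplifying assumption of
>>>>>> mine):
>>>>>>
>>>>>>     # Two squares shifting right in unison, one unit per frame.
>>>>>>     frames = [[0, 1], [1, 2], [2, 3]]
>>>>>>
>>>>>>     def expected_hits(predict, frames):
>>>>>>         """Count next-frame squares a hypothesis correctly expects."""
>>>>>>         hits = 0
>>>>>>         for cur, nxt in zip(frames, frames[1:]):
>>>>>>             hits += sum(1 for p in predict(cur) if p in nxt)
>>>>>>         return hits
>>>>>>
>>>>>>     h1 = lambda cur: [cur[1]]              # match one square, ignore the rest
>>>>>>     h2 = lambda cur: [cur[1], cur[0] + 2]  # square 2 stops, square 1 jumps it
>>>>>>     h3 = lambda cur: [x + 1 for x in cur]  # both move right in unison
>>>>>>
>>>>>>     for name, h in [("#1", h1), ("#2", h2), ("#3", h3)]:
>>>>>>         print(name, expected_hits(h, frames))  # 2, 4, 4
>>>>>>     # #2 and #3 tie on expected observations in this toy encoding, so
>>>>>>     # the simplicity tie-break is what prefers #3.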
>>>>>>
>>>>>> *Here is a simplified version of how we solve case study 2:*
>>>>>> We expect the original square to keep moving at a similar velocity
>>>>>> from left to right, because we hypothesized that it did move from
>>>>>> left to right and we calculated its velocity. If this expectation is
>>>>>> confirmed, then that hypothesis is more likely than saying that the
>>>>>> square suddenly stopped and another started moving. Such a change
>>>>>> would be unexpected, and such a conclusion would be unjustifiable.
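>>>>>>
>>>>>> As a toy sketch (the positions are made-up x-coordinates), the
>>>>>> velocity expectation can be checked like this:
>>>>>>
>>>>>>     positions = [0, 1, 2]                     # past x-positions of the square
>>>>>>     velocity = positions[-1] - positions[-2]  # 1 unit per frame
>>>>>>     expected_next = positions[-1] + velocity  # where we expect it next
>>>>>>
>>>>>>     next_frame = [3, 2]  # the new, stationary square appears at x=2
>>>>>>     print(expected_next in next_frame)  # True: "kept moving" is
>>>>>>     # confirmed, so we prefer it over "stopped and a new one appeared"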
>>>>>>
>>>>>> I also believe that explanations which generate fewer incorrect
>>>>>> expectations should be preferred over those that generate more
>>>>>> incorrect expectations.
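>>>>>>
>>>>>> One way to fold that in (my assumption, not a worked-out part of the
>>>>>> method) is to subtract incorrect expectations from confirmed ones:
>>>>>>
>>>>>>     def explanation_score(expected, observed):
>>>>>>         """Confirmed expectations add to the score; expectations that
>>>>>>         never materialize subtract from it."""
>>>>>>         expected, observed = set(expected), set(observed)
>>>>>>         confirmed = len(expected & observed)  # came true
>>>>>>         incorrect = len(expected - observed)  # did not
>>>>>>         return confirmed - incorrect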
>>>>>>
>>>>>> The idea I came up with earlier this month, regarding high frame
>>>>>> rates to reduce uncertainty, is still applicable. It is important
>>>>>> that all generated hypotheses have as little uncertainty as possible,
>>>>>> given our constraints and available resources.
>>>>>>
>>>>>> I thought I'd share my progress with you all. In the coming days and
>>>>>> weeks, I'll be testing the ideas on test cases such as the ones I
>>>>>> mentioned.
>>>>>>
>>>>>> Dave
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Abram Demski
>>>> http://lo-tho.blogspot.com/
>>>> http://groups.google.com/group/one-logic
>>>>
>>>
>>>
>>
>>
>>
>> --
>> Abram Demski
>> http://lo-tho.blogspot.com/
>> http://groups.google.com/group/one-logic
>>
>
>



-- 
Abram Demski
http://lo-tho.blogspot.com/
http://groups.google.com/group/one-logic


