Abram,

I haven't found a method that I think works consistently yet. I was trying
methods like the one you suggested, which count the number of correct
predictions or expectations. But then I ran into a problem: what if the
predictions you are counting are more of the same? Do you count them or
not? For example, let's say we see a piece of paper on a table in an image,
and we see that the paper looks different from the table but moves with it.
We can hypothesize that they are attached. Now suppose it is not a piece of
paper but a mural. Do you count every little piece of the mural that moves
with the table as a separate correct prediction? Or is it a single
prediction? And what about the number of times they move together? It
doesn't seem right to count each and every occurrence, but we also have to
be careful about coincidental movement. Just because two things seem to
move together in one frame out of 1000 does not mean we should consider
them attached, even temporarily.
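To make the counting ambiguity concrete, here is a small sketch. The fragment and frame counts are invented for illustration; the point is only how far apart two plausible counting conventions land for the same "attached" hypothesis.

```python
# Hypothetical mural example: a single "attached" hypothesis explains why
# many mural fragments co-move with the table across many frames.
fragments = 50    # assumed number of distinguishable mural pieces
frames = 1000     # assumed number of frames in which they co-move

# Scheme A: count every fragment in every frame as a correct prediction.
per_observation = fragments * frames

# Scheme B: count the hypothesis once, regardless of how many
# observations it explains.
per_hypothesis = 1

print(per_observation)  # 50000
print(per_hypothesis)   # 1
```

The two schemes differ by four orders of magnitude for the same evidence, which is exactly why a measure based on raw prediction counts is so sensitive to how "one prediction" is defined.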

So, quantitatively defining "simpler" and "more predictive" is quite
challenging. I am honestly a bit stumped about how to do it at the moment.
I will keep trying to find ways to at least approximate them, but I'm
really not sure of the best way.

Of course, I haven't been working on this specific problem for long, but
other people have tried to quantify our explanatory methods in other areas
and have also failed. I think part of the failure comes from trying to
explain very different things with a single method, when those things
probably call for different methods, ones that are more heuristic than
mathematically precise. It's all quite overwhelming to analyze sometimes.

I have also thought about measuring the fraction correct vs. incorrect. The
truth is, I haven't locked onto and carefully analyzed the different ideas
I've come up with, because they all seem to have issues and are difficult
to analyze. I definitely need to try some out, see what the results are,
and document them better.

Dave

On Thu, Jul 22, 2010 at 10:23 PM, Abram Demski <abramdem...@gmail.com> wrote:

> David,
>
> What are the different ways you are thinking of for measuring the
> predictiveness? I can think of a few different possibilities (such as
> measuring number incorrect vs measuring fraction incorrect, et cetera) but
> I'm wondering which variations you consider significant/troublesome/etc.
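The number-vs-fraction distinction Abram raises can matter in practice: when two hypotheses have been tested on different numbers of predictions, the two measures can rank them in opposite orders. A sketch with made-up error counts:

```python
# Two hypothetical hypotheses tested on different numbers of predictions
# (all counts here are invented for illustration).
a_wrong, a_total = 10, 100   # hypothesis A: 10 wrong out of 100
b_wrong, b_total = 5, 20     # hypothesis B: 5 wrong out of 20

# By raw count of errors, B looks better (5 < 10).
by_count = min((a_wrong, "A"), (b_wrong, "B"))

# By fraction of errors, A looks better (0.10 < 0.25).
by_fraction = min((a_wrong / a_total, "A"), (b_wrong / b_total, "B"))

print(by_count[1])     # B
print(by_fraction[1])  # A
```

So the choice between the two is not cosmetic: it changes which hypothesis wins whenever the hypotheses have been exposed to different amounts of evidence.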
>
> --Abram
>
> On Thu, Jul 22, 2010 at 7:12 PM, David Jones <davidher...@gmail.com> wrote:
>
>> It's certainly not as simple as you claim. First, assigning a probability
>> is not always possible, nor is it easy. The factors that go into calculating
>> that probability are unknown and are not the same for every instance. Since
>> we do not know what combination of observations we will see, we cannot have
>> a predefined set of probabilities, nor is it any easier to create a
>> probability function that generates them for us. That is exactly what I
>> meant by quantitatively defining the predictiveness: it would be
>> proportional to the probability.
>>
>> Second, if you can define a program in a way such that it is always simpler
>> when it is smaller, then you can do the same thing without a program. I
>> don't think it makes any sense to do it this way.
>>
>> It is not that simple. If it were, we could solve a large portion of AGI
>> easily.
>>
>> On Thu, Jul 22, 2010 at 3:16 PM, Matt Mahoney <matmaho...@yahoo.com>
>> wrote:
>>
>> David Jones wrote:
>>
>> > But, I am amazed at how difficult it is to quantitatively define more
>> predictive and simpler for specific problems.
>>
>> It isn't hard. To measure predictiveness, you assign a probability to each
>> possible outcome. If the actual outcome has probability p, you score a
>> penalty of log(1/p) bits. To measure simplicity, use the compressed size of
>> the code for your prediction algorithm. Then add the two scores together.
>> That's how it is done in the Calgary challenge
>> http://www.mailcom.com/challenge/ and in my own text compression
>> benchmark.
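Matt's scoring rule can be sketched in a few lines. This is an illustrative toy, not the actual harness used by the Calgary challenge or his benchmark; the probability, outcome, and code size below are invented for the example.

```python
import math

def prediction_penalty(p_of_actual_outcome):
    """Log loss in bits: if the actual outcome was assigned probability p,
    the penalty is log2(1/p). Confident correct predictions cost little;
    confident wrong ones cost a lot."""
    return math.log2(1.0 / p_of_actual_outcome)

# Hypothetical predictor: it assigned probability 0.5 to the outcome
# that actually occurred.
penalty_bits = prediction_penalty(0.5)       # 1 bit

# Hypothetical simplicity term: compressed size of the predictor's code,
# in bits (a made-up number for illustration).
code_size_bits = 800

# Lower total score is better: accuracy and simplicity trade off directly.
total_score = penalty_bits + code_size_bits
print(total_score)  # 801.0
```

One attraction of this scheme is that both terms are in the same unit (bits), so "more predictive" and "simpler" become directly comparable rather than needing a separate tie-breaking rule.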
>>
>>
>>
>> -- Matt Mahoney, matmaho...@yahoo.com
>>
>> *From:* David Jones <davidher...@gmail.com>
>> *To:* agi <agi@v2.listbox.com>
>> *Sent:* Thu, July 22, 2010 3:11:46 PM
>> *Subject:* Re: [agi] Re: Huge Progress on the Core of AGI
>>
>> Because simpler is not better if it is less predictive.
>>
>> On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski <abramdem...@gmail.com>
>> wrote:
>>
>> Jim,
>>
>> Why more predictive *and then* simpler?
>>
>> --Abram
>>
>> On Thu, Jul 22, 2010 at 11:49 AM, David Jones <davidher...@gmail.com>
>> wrote:
>>
>> An Update...
>>
>> I think the following gets to the heart of general AI and what it takes to
>> achieve it. It also provides us with evidence as to why general AI is so
>> difficult. With this new knowledge in mind, I think I will be much more
>> capable now of solving the problems and making it work.
>>
>> I've come to the conclusion lately that the best hypothesis is better
>> because it is more predictive, and then simpler, than other hypotheses (in
>> that order: first more predictive, then simpler). But I am amazed at how
>> difficult it is to quantitatively define "more predictive" and "simpler"
>> for specific problems. This is why I have sometimes doubted the truth of
>> the statement.
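The "more predictive, then simpler" ordering is a lexicographic comparison, which can be sketched as follows. The hypothesis names and scores are placeholders; as the surrounding discussion says, the hard part is defining the scoring functions themselves.

```python
# Hypothetical hypotheses scored as (predictiveness, simplicity),
# both on a made-up 0-1 scale where higher is better.
hypotheses = {
    "attached":         (0.9, 0.8),
    "coincidence":      (0.4, 0.9),
    "attached-complex": (0.9, 0.5),
}

# Python compares tuples lexicographically: predictiveness dominates,
# and simplicity only breaks ties among equally predictive hypotheses.
best = max(hypotheses, key=hypotheses.get)
print(best)  # attached
```

Note that "coincidence" loses despite being the simplest, and "attached-complex" loses only on the simplicity tie-breaker: simplicity never overrides predictiveness in this ordering.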
>>
>> In addition, the observations that the AI gets are not representative of
>> all observations. This means that if your measure of "predictiveness"
>> depends on the number of certain observations, it can make mistakes: the
>> specific observations you are aware of may be unrepresentative of the
>> predictiveness of a hypothesis relative to the truth. If you try to
>> calculate which hypothesis is more predictive and you lack the critical
>> observations that would give you the right answer, you may get the wrong
>> answer. This all depends, of course, on your method of calculation, which
>> is quite elusive to define.
>>
>> Visual input from screenshots, for example, can be somewhat malicious.
>> Things can move, appear, disappear, or occlude each other suddenly. So,
>> without sufficient knowledge it is hard to decide whether the matches you
>> find across such large changes belong to the same object or to different
>> objects. This may indicate that bias and preprogrammed experience should
>> be introduced to the AI before training. Either that, or the training
>> inputs should be carefully chosen to avoid malicious input and to make
>> them amenable to learning.
>>
>> This is the "correspondence problem" that is typical of computer vision
>> and has never been properly solved. Such malicious input also makes it
>> difficult to learn automatically because the AI doesn't have sufficient
>> experience to know which changes or transformations are acceptable and which
>> are not. It is immediately bombarded with malicious inputs.
>>
>> I've also realized that if a hypothesis is more "explanatory", it may be
>> better. But quantitatively defining "explanatory" is also elusive, and it
>> truly depends on the specific problems you are applying it to, because it
>> is a heuristic. It is not a true measure of correctness; it is not loyal
>> to the truth. "More explanatory" is really a heuristic that helps us find
>> hypotheses that are more predictive. The true measure of whether a
>> hypothesis is better is simply how accurate and predictive it is. That is
>> the ultimate and true measure of correctness.
>>
>> Also, since we can't measure every possible prediction or every last
>> prediction (and we certainly can't predict everything), our measure of
>> predictiveness can't possibly be right all the time! We have no choice but
>> to use a heuristic of some kind.
>>
>> So, it's clear to me that the right hypothesis is the one that is "more
>> predictive and then simpler". But it is also clear that there will never
>> be a single measure of this that can be applied to all problems. I hope to
>> eventually find a nice model for how to apply it to different problems,
>> though. This may be the reason that so many people have tried and failed
>> to develop general AI. Yes, there is a solution, but there is no silver
>> bullet that can be applied to all problems. Some methods are better than
>> others. But I think another major reason for the failures is that people
>> think they can predict things without sufficient information. By
>> approaching the problem this way, we compound the need for heuristics and
>> the errors they produce, because we simply don't have sufficient
>> information to make a good decision with limited evidence. Approached
>> correctly, the right solution would solve many more problems with the same
>> effort than a poor solution would. It would also eliminate some of the
>> difficulties we currently face, provided sufficient data is available to
>> learn from.
>>
>> In addition to all this theory about better hypotheses, you have to add on
>> the need to solve problems in reasonable time. This also compounds the
>> difficulty of the problem and the complexity of solutions.
>>
>> I am always fascinated by the extraordinary difficulty and complexity of
>> this problem. The more I learn about it, the more I appreciate it.
>>
>> Dave
>>
>>
>>
>>
>> --
>> Abram Demski
>> http://lo-tho.blogspot.com/
>> http://groups.google.com/group/one-logic
>>
>>
>>
>>
>>
>
>
>
> --
> Abram Demski
> http://lo-tho.blogspot.com/
> http://groups.google.com/group/one-logic
>



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
