I certainly don't have the ability to predict what any of you are
going to say next. And that is relevant. Suppose I finished this
paragraph but did not post it right away. I can tell you now that you
would not be able to predict what I am going to say next. (You can try
it any time you want more insight into what your brain is
doing.) You might be able to get a point that looked like it was
going in the same direction as one of my comments, but most of your
predictions would be way off. The problem is reconciling this kind of
experiment with the view that 'prediction' is a central aspect of
'understanding'. The only way you can tune your ability to predict
what someone is going to say next, so that it comes anywhere close to what
someone actually does say next, is by roughing it out in broader
generalizations. (If you tried the experiment you would see what I
mean.) These generalizations, and the collections of possibilities that you
might not be able to actually predict but that stand ready to be drawn into
consciousness when matched to the (mental) analysis of a situation, may
be called 'expectations'. But I don't think the word 'prediction'
really cuts it. Unfortunately, I then find myself confronted with the
Frame Problem. The problem might then be explaining how it is that you
are able to 'predict' what is unlikely to occur. The answer might be
that narrower, less general events are more unlikely to occur, and
that broader generalizations of expected events are a little safer as
long as you can figure out how to generalize (what would be) the
reasonable expectations (if they had actually been cued as
expectations). Part of my view on relativism is that almost anything
could be made more general or more specific and there are many ways
relative to some subject matter that this could be done. So, for
example, if I think that Matt drives a black car, I might generalize
the prediction by revising it to "Matt probably does not drive a
brightly colored car." If I say that Matt probably does not drive a
white car, my chances of getting it right jump, because even though
the claim is more specific, the negation makes it come out as more
general.
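The car-color point can be sketched numerically. This is a toy illustration under an entirely made-up prior over car colors (the distribution and the "bright" category are my assumptions, not data): a negated claim covers a larger share of the possibilities than a specific one, so it is the "safer" prediction.

```python
# Toy sketch of the car-color example. The prior over colors and the
# choice of which colors count as "bright" are assumptions for
# illustration only.
colors = {
    "black": 0.25, "white": 0.20, "silver": 0.20, "gray": 0.15,
    "red": 0.10, "blue": 0.07, "yellow": 0.03,
}
bright = {"red", "blue", "yellow"}

p_black = colors["black"]                                        # specific claim
p_not_bright = sum(p for c, p in colors.items() if c not in bright)  # negated, broader
p_not_white = 1.0 - colors["white"]                              # negated, broader

print(f"P(black)      = {p_black:.2f}")
print(f"P(not bright) = {p_not_bright:.2f}")
print(f"P(not white)  = {p_not_white:.2f}")
```

Under this toy prior the negated claims each cover most of the probability mass, while the specific claim covers only one slice, which is the sense in which negation buys generality.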

I  think an algorithm that employs Solomonoff Induction would be a
learning algorithm but it would not be a complete AGI Learning
Algorithm because it would be too narrow. Although it is not narrowly
specific the fact that it cannot detect alternative ways of referring
to a subject without extensive reworking of the concept shows that it
not truly general. And it is certainly not effectively equivalent to
every kind of learning algorithm. So a principle that is derived from
something like Solomonoff Induction would not automatically rise to
being rightfully declared a principle that applies to all learning
algorithms.
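To make the reference concrete, here is my own toy sketch of Solomonoff-style induction, not a real implementation: hypotheses consistent with the data so far are weighted by 2^-length and the weighted mixture predicts the next bit. The hypothesis set and the "description lengths" are made up for illustration.

```python
# Toy Solomonoff-style mixture (an illustrative sketch, not a real
# implementation): weight each hypothesis by 2**(-length), keep only
# hypotheses consistent with the bits seen so far, and predict the
# next bit from the length-weighted mixture.

def always(bit):
    return lambda history: bit               # constant predictor

def alternate(history):
    return 1 - history[-1] if history else 0  # flip the previous bit

# (hypothesis, assumed description length in bits)
hypotheses = [(always(0), 3), (always(1), 3), (alternate, 5)]

def consistent(h, data):
    # A hypothesis survives if it would have predicted every bit seen.
    return all(h(data[:i]) == data[i] for i in range(len(data)))

def predict_next(data):
    # P(next bit = 1) = weight of surviving hypotheses that say "1",
    # normalized by the total weight of surviving hypotheses.
    total = p1 = 0.0
    for h, length in hypotheses:
        if consistent(h, data):
            w = 2.0 ** -length
            total += w
            if h(data) == 1:
                p1 += w
    return p1 / total if total else 0.5

print(predict_next([1, 1, 1]))     # only always-1 survives -> 1.0
print(predict_next([0, 1, 0, 1]))  # only alternation survives; it says 0 next -> 0.0
```

The narrowness complaint above is visible even here: the mixture only ever considers the representations it was handed, so an "alternative way of referring" to the same pattern would need a new hypothesis rather than being detected.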

Finally, we do have the ability to change our utility function
evaluations. Even if you tried to rhetorically shape a workaround,
such as saying that we can employ different points of view while
evaluating a point and that this explains the employment of different
utility function evaluations, the problem is so pervasive and
fundamental to the understanding of 'understanding' that it really
indicates that a method which proclaims that utility functions are
fixed (even relative to some relatively explicit selection) is an
outdated point of view. Like thinking that logic is the language of
(all) thought.
Jim Bromer


On Thu, Nov 27, 2014 at 10:35 PM, Matt Mahoney via AGI <[email protected]> wrote:
> On Wed, Nov 26, 2014 at 9:32 PM, Logan Streondj via AGI <[email protected]> 
> wrote:
>> but I wouldn't say predict.
>
> If you prefer to call it motion estimation or planning or adaptation,
> then call it that. I am just trying to agree on the meanings of some
> words so we can communicate effectively. Can you agree that these
> tasks all involve assigning probabilities to events that you haven't
> observed yet?
>
>>> > You
>>> > are able to understand my words because you can predict a large
>>> > fraction of them and only need to remember the differences.
>>
>> that just sounds like a nonsensical statement.
>
> If you disagree, then try taking some of my words and scrambling them
> in random order and see which sentence is easier to remember. Then try
> scrambling the letters in random order. Do you see that the task
> becomes progressively harder because you are not able to predict the
> next word or next letter?
>
>> I do a fair amount of meditation and meta-thought analysis.
>
> Have you written any language modeling or AI software? It might give
> you a little more insight into what your brain is doing.
>
> --
> -- Matt Mahoney, [email protected]
>
>
> -------------------------------------------
> AGI
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/24379807-653794b5
> Modify Your Subscription: https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com

