On Thu, Aug 7, 2008 at 1:37 AM, Harry Chesley <[EMAIL PROTECTED]> wrote:
>
> Generally, I agree. However, rote learning can be a part of modeling. We
> learn arithmetic by rote, but then apply it to non-rote models, for example.
> Rote learning can provide parts of the model. Taken to extremes (as in an AI
> program), rote can conceivably provide everything.

In this form, it sounds to me equivalent to "evidence is needed for
learning" or something like that. What is the distinction you are
making? A fuzzy, shaky general model vs. a clear-cut general model?


>>  A book is a "request for understanding"; it can be converted into a
>>  model if read by someone. I think of meaning as the target of an
>>  optimization process permitted by a given model of the environment.
>>  When you have a question, it creates a process of arriving at an
>>  answer, and so the meaning of this question is in the shape of your
>>  activity of finding the answer, in the target of this process. If it
>>  is expected that a book gets read, it is a part of the optimization
>>  process in the model that anticipates that. If the book is currently
>>  burning, and is expected to be reduced to ashes, it is not a part of
>>  such a process, and it has no understanding or meaning relevant to
>>  what's written in it.
>
> Here and above, I think you need to distinguish between understanding and
> expressing or using understanding. You seem to be saying that understanding
> exists only when being expressed or used, and I wouldn't agree with that,
> though the point is subtle enough that it probably doesn't matter, since
> unused understanding is functionally irrelevant.

Not exactly. Generality is only needed to account for uncertainty. You
need a general model that is good for many situations only if you
don't know which of these situations will actually occur. Thus, most
of the generality always remains unused; generality applies not to how
the model is used, but to how it is _expected_ to be used. Each part
of the overall model of the environment needs to apply in the contexts
permitted by the rest of the model.
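
To make this concrete, here is a small Python sketch of my own for this
reply (the squaring example and names like rote_squares are just
placeholder illustrations, not anything from the thread): a rote lookup
table is enough whenever the situation is known in advance, and the
general rule only earns its keep when you can't predict which input
will actually occur; for any particular run, most of its generality
goes unused.

import random

# "Rote" model: memorized answers for the situations prepared for in advance.
rote_squares = {2: 4, 3: 9, 5: 25}

# "General" model: applies to any input, including ones never seen before.
def general_square(x):
    return x * x

def answer(x):
    # Fall back on the general model only where rote knowledge runs out.
    return rote_squares.get(x, general_square(x))

# Known situation: the rote entry suffices, and the generality stays unused.
print(answer(3))            # 9, straight from the lookup table

# Uncertain situation: the input can't be predicted in advance, so
# generality is needed to cover whichever situation actually occurs.
x = random.randint(1, 100)
print(x, answer(x))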


> You say "a book...can be converted into a model if read by someone," but
> what does reading do other than convert from one representation (printed
> words) to another (neural connections). (It also presumably connects the new
> knowledge to previously acquired knowledge, but that prior knowledge /could/
> have been in the book too.) The only difference is that the new
> representation is more ready to be used.
>
> Then you get asked a question and the neural mechanism goes to work and uses
> the knowledge to produce an answer showing your understanding. But you still
> had the understanding before you used it, and you still have it now even
> though you're not using that part of your brain at the moment.
>

I think that understanding that is known to never apply is in some
sense no longer "understanding". It often has an aura of free will
about it ( http://www.overcomingbias.com/2008/06/possibility-and.html
), suggesting that it could be possible to create a situation in which
it applies, but that is a separate problem, one of unpredictability of
action ( http://causalityrelay.wordpress.com/2008/08/04/causation-and-actions/
). It might indeed turn out to be possible, or you might not know
enough to be certain that a situation in which the understanding
applies won't occur. But if you know that it won't apply, I think it
becomes meaningless, if meaning is considered a property of the
control algorithm (and I consider that a good match, since generality
resides in the model, not in the environment, and generality is the
main feature of understanding).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

