On 27/09/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> William Pearson wrote:
> > I am interested in meta-learning voodoo, so I thought I would add my
> > view on KR in this type of system.
> >
> > If you are interested in meta-learning the KR you have to ditch
> > thinking about knowledge as the lowest level of changeable
> > information in your system, and just think about changing state. State
> > is related to knowledge in that states can represent knowledge. The
> > difference lies in the fact that you can't change the knowledge to
> > change the knowledge representation, however you can change state to
> > change the knowledge representation.
> >
> > We do this when we program computers with different KR for example.
> > You can't call the low level bits and bytes of a computer a KR,
> > because they are not intrinsically about any one thing, they are just
> > state.
> >
> > Of course with this meta-learning situation, you do have to give an
> > initial KR to be modified and improved upon, so discussion of the
> > initial KR is still interesting.

> I can't quite understand what you are saying.
>
> "Metalearning", the way I would use it (if I did, which is not very
> often) means some adaptive process to find learning mechanisms that
> actually work.

I am using it in a closely related sense: start with a learning
mechanism, then alter both that mechanism and the knowledge
representation to suit the specific problems the system faces, using
the information available to it. Since I believe this sort of
meta-learning occurs in humans, I would expect a physicist to have
different learning mechanisms and knowledge representations than a
racing car driver when thinking about a car's movement, despite both
starting with similar representations and mechanisms at birth.
Similarly, there will be differences in knowledge representation
between a beginning chess player and a master.

You could test this by looking at the brain regions people use when
solving problems. If these vary on a person-by-person basis, then it
is likely that some form of the meta-learning I am interested in
occurs in humans.
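
To make the state-versus-KR point a bit more concrete, here is a toy
sketch (purely my own illustration, with made-up names and a made-up
braking-distance example, not a proposal for a real design). The idea
it shows is that the representation which interprets the knowledge is
itself just ordinary program state, so a meta-level step can swap the
representation rather than merely edit knowledge expressed within it:

def table_predict(state, speed):
    # KR 1: knowledge is a lookup table {speed: stopping distance}
    table = state["kr_data"]
    return table.get(speed, max(table.values()))

def quadratic_predict(state, speed):
    # KR 2: knowledge is a single coefficient in distance = k * speed^2
    return state["kr_data"] * speed * speed

REPRESENTATIONS = {"table": table_predict, "quadratic": quadratic_predict}

def meta_step(state, examples):
    # Propose the alternative representation and keep whichever fits the
    # observed examples better.  Only state is rewritten; there is no
    # privileged "knowledge" layer that has to stay fixed.
    def error(s):
        predict = REPRESENTATIONS[s["kr_name"]]
        return sum(abs(predict(s, v) - d) for v, d in examples)

    if state["kr_name"] == "table":
        candidate = {"kr_name": "quadratic", "kr_data": 0.05}
    else:
        candidate = {"kr_name": "table", "kr_data": dict(examples)}
    return candidate if error(candidate) < error(state) else state

# Observed (speed, stopping distance) pairs; start with the table KR and
# let the meta-step switch representation if the rule fits better.
examples = [(10, 5.0), (20, 20.0), (30, 45.0)]
state = {"kr_name": "table", "kr_data": {10: 5.0, 20: 20.0}}
state = meta_step(state, examples)
print(state["kr_name"])  # -> "quadratic"

A real system would of course generate and evaluate candidate
representations rather than pick between two hard-coded ones; the only
point here is that because the KR is just more state, it can be changed
by the same kind of process that changes anything else.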

> This is really a methodological issue, about the
> procedures we set up to find adequate learning mechanisms, not a
> run-time issue for the AGI.

In your view of AI, maybe. Not in mine, though.

> Thus: I see a class of KR systems being defined, each member of which
> differs from others in some more-or-less parameterized way, and then a
> systematic exploration of the properties (mostly the stability and
> generative power) of the members of that class.
>
> Most importantly, I do not see us going back to some extremely
> impoverished "blank slate" class of systems when we start this empirical
> process: I specifically want to see a base system that captures the
> best knowledge we have about the human cognitive system.


I agree to a certain extent: a blank slate view is not appropriate,
and if you are trying to do exactly the same thing as humans, then
capturing that knowledge is the best way forward. However, different
modalities may require different initial knowledge representations.
