On 9/9/05, Yan King Yin <[EMAIL PROTECTED]> wrote:

> "learning to learn" which I interpret as applying the current knowledge 
> rules to the knowledge base itself. Your idea is to build an AGI that can 
> modify its own ways of learning. This is a very fanciful idea but is not the
> 
> most direct way to build an AGI. Instead of building an AGI you're trying to

Define what you mean by an AGI. Learning to learn is vital if you wish
to try to ameliorate the No Free Lunch theorems of learning: no single
fixed learner does well across all environments, so a system that can
change how it learns has a chance where a fixed one does not.
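
To make the point concrete, here is a toy sketch of my own invention
(nothing from the NFL literature itself): two fixed predictors each
embody an assumption about the world, and a meta-level that reselects
between them based on observed error copes with both streams, while
either fixed choice fails badly in one of them.

    import random

    def majority(history):    # assumes a biased-coin world
        if not history:
            return 1
        return int(sum(history) * 2 >= len(history))

    def flip(history):        # assumes an alternating world
        return 1 - history[-1] if history else 1

    def score(predictor, stream):
        history, errors = [], 0
        for bit in stream:
            errors += predictor(history) != bit
            history.append(bit)
        return errors

    def score_meta(stream, learners=(majority, flip)):
        history, errs, total = [], [0, 0], 0
        for bit in stream:
            best = errs.index(min(errs))   # trust the best learner so far
            total += learners[best](history) != bit
            for i, learner in enumerate(learners):
                errs[i] += learner(history) != bit
            history.append(bit)
        return total

    random.seed(0)
    biased = [int(random.random() < 0.9) for _ in range(200)]
    alternating = [i % 2 for i in range(200)]

    for name, s in (("biased", biased), ("alternating", alternating)):
        print(name, "majority:", score(majority, s),
              "flip:", score(flip, s), "meta:", score_meta(s))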

> build something that can learn to become an AGI. Unfortunately, this
> approach is computationally *inefficient*.
> You seem to think that letting an AGI *learn* from its environment is
> superior to programming it by hand. In reality, learning is not magic. 1.

Having a system learn from the environment is superior to programming
it by hand with no ability to learn from the environment, though the
two are not mutually exclusive. Learning is superior because humans
(and genomes) have imperfect knowledge of what the system they are
programming will come up against in its environment.

> It takes time. 2. It takes supervision (in the form of labeled examples). 

It depends what you characterise as learning. I tend to include such
things as the visual centres being repurposed for audio processing in
blind individuals as learning; there you do not have labeled examples.
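
For contrast, a minimal sketch of learning without labels (plain
k-means on made-up 1-D data; my own toy, not a model of cortical
rewiring): the structure is recovered from the unlabeled samples
alone.

    import random

    # Two cluster centres are recovered from unlabeled 1-D samples;
    # no labels are ever supplied.
    random.seed(1)
    data = [random.gauss(0.0, 0.5) for _ in range(100)] + \
           [random.gauss(5.0, 0.5) for _ in range(100)]

    centres = [min(data), max(data)]          # crude initialisation
    for _ in range(10):
        groups = [[], []]
        for x in data:
            # bool indexes 0 or 1: the nearest centre claims the point
            groups[abs(x - centres[0]) > abs(x - centres[1])].append(x)
        centres = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centres)]

    print("recovered centres:", [round(c, 2) for c in centres])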

> Because of these two things, programming an AGI by hand is not necessarily 
> dumber than building an AGI that can learn.

So I look at learning systems and at what I can learn from them...

> But of course we cannot have a system that is totally rigid. To be
> practical, we need to have a flexible system that can learn and that
> can also be programmed.
> In summary I think your problem is that you're not focusing on
> building an AGI *efficiently*. Instead you're fantasizing about how
> the AGI can improve itself once it is built.

Personally, as someone trying to build a system that can modify itself
as much as possible, I am simply tackling the problems that evolution
had to deal with when building us. It is all problem solving of sorts
(and as such comes under the heading of AI), but dealing with failure,
erroneous inputs and energy usage are much more fundamental problems
to solve than high-level cognition.
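
To give a feel for what I mean, a sketch under assumptions entirely my
own (the wrapper and its policy are invented for illustration): before
any cognition happens, something has to guard a possibly self-modified
component against bad input, failure and a finite budget.

    def safe_step(component, raw_input, fallback, budget):
        """Run one step of a possibly self-modified component, guarding
        against erroneous input, failure and budget exhaustion."""
        if budget <= 0:
            return fallback, budget        # out of "energy": do the cheap thing
        if not isinstance(raw_input, (int, float)):
            return fallback, budget - 1    # erroneous input: reject, don't crash
        try:
            return component(raw_input), budget - 1
        except Exception:                  # the modified code may be broken
            return fallback, budget - 1

    # Usage: a component that fails on zero, guarded by the wrapper.
    risky = lambda x: 1.0 / x
    budget = 3
    for raw in (2.0, 0.0, "garbage", 4.0):
        out, budget = safe_step(risky, raw, fallback=0.0, budget=budget)
        print(raw, "->", out, "(budget left:", budget, ")")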

> The ability of an AGI to modify itself is not 
> essential to building an AGI efficiently. Nor can it help the AGI to learn 
> its basic knowledge faster. Self modification of an AGI will only happen 
> after it has acquired at least human-level knowledge.

This I don't agree with. Humans and other animals can reroute things
unconsciously, such as the visual system adapting when prisms worn in
front of the eyes 24/7 turn the image upside down. It takes a while
(weeks), but it does happen, and I see it as evidence for low-level
self-modification.

> It is just a fantasy 
> that self-modification can *speed up* the acquisition of basic knowledge. 

It can speed up the acquisition of basic knowledge if the programmer
got the assumptions about the world wrong, which I think is very
likely.
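
A toy numeric sketch of my own making: a learner whose hand-set
assumption (a tiny correction rate) turns out to be wrong acquires a
simple fact slowly, while one allowed to modify its own learning rule
catches up quickly.

    target = 10.0        # the "fact" to be learned
    steps = 50

    # Fixed learner: the programmer assumed small corrections suffice.
    est, rate = 0.0, 0.01
    for _ in range(steps):
        est += rate * (target - est)
    print("fixed assumption:", round(est, 2))

    # Self-modifying learner: same wrong starting rate, but it raises
    # the rate whenever the error is not shrinking fast enough.
    est, rate = 0.0, 0.01
    prev_err = abs(target - est)
    for _ in range(steps):
        est += rate * (target - est)
        err = abs(target - est)
        if err > 0.5 * prev_err:   # progress too slow: change own rule
            rate = min(rate * 2.0, 1.0)
        prev_err = err
    print("self-modifying:  ", round(est, 2))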

> The difference would be like driving an ordinary car and a Formula-1,
> in the city area =) Not to mention that we don't possess the tools to
> make the Formula-1 yet.

That is all I am trying to do at the moment: make tools. Whether they
are tools to do what you describe as making a Formula-1 car, I don't
know.

Will Pearson
