On 9/12/05, Yan King Yin <[EMAIL PROTECTED]> wrote:
> Will Pearson wrote:
>
> > Define what you mean by an AGI. Learning to learn is vital if you wish to
> > try and ameliorate the No Free Lunch theorems of learning.
>
> I suspect that No Free Lunch is not very relevant in practice. Any learning
> algorithm has its implicit way of generalization and it may turn out to be
> good enough.

I suspect that it will be quite important in competition between
agents. If one agent has a constant method of learning, it will be
more easily predicted by an agent that can figure out that constant
method (if it is simple). If it changes (and changes how it changes),
then it will be less predictable and may avoid being exploited by
other agents.
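
To make the predictability point concrete, here is a toy sketch in
Python (the agents, rules and numbers are my own inventions, not
anyone's proposal). One rock-paper-scissors player learns with a fixed,
known rule; a second player simulates that rule and counters it every
round; a third switches its learning rule at random and is much harder
to simulate.

# Sketch: an agent with a fixed, known learning rule can be exploited by an
# opponent that simulates that rule; an agent that changes how it learns is
# much harder to simulate. Rock-paper-scissors: 0=rock, 1=paper, 2=scissors.
import random

BEATS = {0: 1, 1: 2, 2: 0}   # BEATS[x] is the move that beats x

def frequency_learner(opponent_history):
    """Fixed learning rule: counter the opponent's most frequent move so far."""
    if not opponent_history:
        return random.randrange(3)
    counts = [opponent_history.count(m) for m in range(3)]
    return BEATS[counts.index(max(counts))]

def exploiter(own_history, opponent_rule):
    """Simulate the opponent's known rule (which watches our history) and counter it."""
    predicted = opponent_rule(own_history)
    return BEATS[predicted]

def rule_switcher(opponent_history, rules):
    """Stand-in for 'learning to learn': use a different learning rule each round."""
    return random.choice(rules)(opponent_history)

def play(n_rounds, agent_a, agent_b):
    """Return agent_a's score (wins minus losses) over n_rounds."""
    hist_a, hist_b, score_a = [], [], 0
    for _ in range(n_rounds):
        move_a = agent_a(hist_a, hist_b)   # (own history, opponent history)
        move_b = agent_b(hist_b, hist_a)
        if move_a == BEATS[move_b]:
            score_a += 1
        elif move_b == BEATS[move_a]:
            score_a -= 1
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a

rules = [frequency_learner,
         lambda h: random.randrange(3),
         lambda h: BEATS[h[-1]] if h else random.randrange(3)]

fixed = lambda own, opp: frequency_learner(opp)
exploit = lambda own, opp: exploiter(own, frequency_learner)
switching = lambda own, opp: rule_switcher(opp, rules)

# The fixed learner loses almost every round; the rule-switcher does far better.
print("fixed learner vs exploiter:    ", play(1000, fixed, exploit))
print("switching learner vs exploiter:", play(1000, switching, exploit))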

> > Having a system learn from the environment is superior to programming it
> > by hand and having it unable to learn from the environment. They are not
> > mutually exclusive. It is superior because humans/genomes have imperfect
> > knowledge of what the system they are trying to program will come up
> > against in the environment.
>
> I agree that learning is necessary, like any sensible person would. The
> question is how to learn efficiently, and *what* to learn. High-level
> mechanisms of thinking can be hard-wired, and that would save a lot of time.

They can also be what I think of as soft-wired: programmed, but also
allowed to be altered by other parts of the system.

> > It depends what you characterise as learning. I tend to include such
> > things as the visual centres being repurposed for audio processing in
> > blind individuals as learning. There you do not have labelled examples.
>
> My point is that unsupervised learning still requires labeled examples
> eventually.

If you include such things as reward in labelling, and self-labelling,
then I agree. I would like to call the feeling where I don't want to
go to bed because of too much coffee 'the jitterings', and to be able
to learn that.

> Your human brain example is not pertinent to AGI because you're
> talking about a brain that is already intelligent,

But the subparts of the brain aren't intelligent, surely? Only the
whole is. You didn't define intelligence, so I can't determine where
our disconnect lies.

> recruiting extra
> resources. We should think about how to build an AGI from scratch. Then you
> may realize that unsupervised learning is problematic.

I work with a strange sort of reinforcement learning as a base. You can
layer whatever sort of learning you want on top of it; I would probably
layer supervised learning on it if the system were going to be social,
which would probably be needed for something to be considered AGI.
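
To show roughly what I mean by layering, here is a minimal sketch in
Python (the class and function names are purely illustrative, not from
any existing system): the base learner only ever sees a scalar reward,
and a supervised "social" layer on top translates a teacher's labels
into that reward, so the base never needs to know labels exist.

# Sketch of layering supervised learning on a reinforcement-learning base.
# The base learner (a trivial contextual bandit) only sees scalar reward;
# the supervised layer converts a teacher's label into that reward signal.
import random
from collections import defaultdict

class RewardDrivenBase:
    """Base learner: picks an action for a context, learns only from reward."""
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.value = defaultdict(float)   # estimated value of (context, action)
        self.count = defaultdict(int)

    def act(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.actions)      # occasional exploration
        return max(self.actions, key=lambda a: self.value[(context, a)])

    def reinforce(self, context, action, reward):
        key = (context, action)
        self.count[key] += 1
        # incremental average of the rewards observed for this pair
        self.value[key] += (reward - self.value[key]) / self.count[key]

class SupervisedLayer:
    """Social layer: turns a teacher's label into reward for the base learner."""
    def __init__(self, base):
        self.base = base

    def learn_from_teacher(self, context, correct_label):
        guess = self.base.act(context)
        reward = 1.0 if guess == correct_label else -1.0
        self.base.reinforce(context, guess, reward)
        return guess

# Usage: teach the system to map simple contexts to labels.
base = RewardDrivenBase(actions=["safe", "dangerous"])
layer = SupervisedLayer(base)
for context, label in [("fire", "dangerous"), ("water", "safe")] * 200:
    layer.learn_from_teacher(context, label)
print(base.act("fire"), base.act("water"))   # usually: dangerous safe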
 
>
> We do not have to duplicate the evolutionary process.

I am not saying we should imitate a flatworm, then a mouse, then a
bird, etc. I am saying that we should look at the problem classes
solved by evolution first, and then see how we would solve them with
silicon. This would hopefully keep us on the straight and narrow and
stop us diverging into a little intellectual cul-de-sac.

> I think directly
> programming a general reasoning mechanism is easier. My approach is to look
> at how such a system can be designed from an architectural viewpoint.

Not my style, but you may produce something I find interesting. 

> > This I don't agree with. Humans and other animals can reroute things
> > unconsciously, such as switching the visual system to see things
> > upside down (having placed prisms in front of the eyes 24/7). It takes
> > a while (over weeks), but then it does happen and I see it as evidence
> > for low-level self-modification.
>
> Your example shows that experience can alter the brain, which is true.
> It does not show that the brain's processing mechanism is flexible --
> namely the use of neural networks for feature extraction,
> classification, etc. Those mechanisms are fixed.

Saying that because the brain uses neurons to classify things, those
methods of classification are fixed, is like saying that because a
Pentium uses transistors to compute things, and the transistors are
fixed, what a Pentium can compute is fixed.

Also, if all neurons do is feature extraction, classification, etc.,
how can we as humans reason and cogitate?

> Likewise, we can directly program an AGI's 
> reasoning mechanisms rather than evolve them.

Once again, I have never said anything about not programming the
system as much as possible.

> > It can speed up the acquisition of basic knowledge, if the programmer
> > got the assumptions about the world wrong. Which I think is very
> > likely.
>
> This is not true. We *know* the rules of thinking: induction, deduction,
> etc, and they are pretty immutable. Why let the AGI re-learn these rules?

Induction and deduction we know. However, there are many things we
don't know. For example, getting information from other humans is an
important part of reasoning, and which humans we should trust, and who
may be out to fool us, we don't know in advance.

Another thing we can't specify completely in advance is the frame
problem. Or how to deal with faulty input: if an electrical storm
interferes with our AGI, how would it know the inputs were faulty?

One last thing we don't know how to deal with is the forgetting
problem. What data should we forget? How do we determine which is the
least important?
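
As a toy illustration of the forgetting problem, here is a sketch in
Python with a scoring rule I made up on the spot (not a proposed
solution): keep a bounded memory store and, when it overflows, discard
the item whose importance, a made-up mix of recency and use count, is
lowest.

# Toy sketch of one possible forgetting policy: when the store overflows,
# forget the item with the lowest "importance", where importance is an
# invented blend of how recently and how often the item has been used.
import time

class ForgettingMemory:
    def __init__(self, capacity, recency_weight=1.0, usage_weight=0.5):
        self.capacity = capacity
        self.recency_weight = recency_weight
        self.usage_weight = usage_weight
        self.items = {}  # key -> {"value": ..., "last_used": t, "uses": n}

    def _importance(self, record, now):
        age = now - record["last_used"]
        # newer and more frequently used items score higher
        return self.usage_weight * record["uses"] - self.recency_weight * age

    def store(self, key, value):
        now = time.monotonic()
        self.items[key] = {"value": value, "last_used": now, "uses": 0}
        if len(self.items) > self.capacity:
            # forget the least important item
            victim = min(self.items,
                         key=lambda k: self._importance(self.items[k], now))
            del self.items[victim]

    def recall(self, key):
        record = self.items.get(key)
        if record is None:
            return None            # already forgotten
        record["last_used"] = time.monotonic()
        record["uses"] += 1
        return record["value"]

# Usage: with capacity 2, the item that was never recalled is forgotten first.
mem = ForgettingMemory(capacity=2)
mem.store("coffee", "causes the jitterings")
mem.store("bed", "place to sleep")
mem.recall("coffee")
mem.store("storm", "may corrupt inputs")   # forces something out
print(sorted(mem.items))                   # 'bed' is the one forgotten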

> > That is all I am trying to do at the moment: make tools. Whether they
> > are tools to do what you describe as making a Formula 1 car, I don't
> > know.
>
> We need a bunch of researchers to focus on making the first functional
> AGI. This requires a lot of determination and not getting distracted by
> too many theoretical issues. Which doesn't mean that theory is
> unimportant. But we need an attitude that is more practical and
> down-to-earth. My observation so far is that a lot of researchers have
> slightly different goals in mind and the result is that we're not really
> connecting with each other.

This is definitely true. But how do you convince other people to come
to your flag? Argumentation, as I have discovered many times, doesn't
help :) The only way to get any sort of consensus is to make it as much
like science as possible and make the impartial world the judge of the
fitness of your ideas.

If you do adopt science, and wish to be able to convince people you are
on the right track before you make an AGI, you would have to make
theories of how the brain works that can be tested, as the brain is the
only example of an intelligent system we have at the moment.

If you don't adopt science, we are just engineering, and we can't tell
whether the system we are engineering will actually be an AGI when we
complete it.

If people are interested, I can put my theories where my mouth is and
give you some predictions about the brain. Not about intelligence per
se, but based on guesses about what type of system some parts of the
vertebrate brain are.

>  "Coming together is a beginning; keeping together is progress; working 
> together is success." -- Henry Ford


  Will Pearson
