Re: [agi] Re: Representing Thoughts

2005-09-12 Thread Yan King Yin

 
Will Pearson wrote:
> Define what you mean by an AGI. Learning to learn is vital if you wish to try and ameliorate the No Free Lunch theorems of learning.

 
I suspect that No Free Lunch is not very relevant in practice.  Any learning algorithm has its own implicit way of generalizing, and that may turn out to be good enough.
> Having a system learn from the environment is superior to programming it by hand with no ability to learn from the environment. They are not mutually exclusive. It is superior because humans/genomes have imperfect knowledge of what the system they are trying to program will come up against in the environment.

 
I agree that learning is necessary, as any sensible person would.  The question is how to learn efficiently, and what to learn.  High-level mechanisms of thinking can be hard-wired, and that would save a lot of time.

> It depends what you characterise as learning; I tend to include such things as the visual centres being repurposed for audio processing in blind individuals as learning. There you do not have labeled examples.

 
My point is that unsupervised learning still requires labeled examples eventually.  Your human brain example is not pertinent to AGI because you're talking about a brain that is already intelligent recruiting extra resources.  We should think about how to build an AGI from scratch.  Then you may realize that unsupervised learning is problematic.

> Personally, as someone trying to build a system that can modify itself as much as possible, I am simply following in the footsteps of evolution, dealing with the problems it had to deal with when building us. It is all problem solving of sorts (and as such comes under the heading of AI), but dealing with failure, erroneous inputs, and energy usage are much more fundamental problems to solve than high-level cognition.

 
We do not have to duplicate the evolutionary process.  I think directly programming a general reasoning mechanism is easier.  My approach is to look at how such a system can be designed from an architectural viewpoint.

> This I don't agree with. Humans and other animals can reroute things unconsciously, such as switching the visual system to see things upside down (having placed prisms in front of the eyes 24/7). It takes a while (over weeks), but then it does happen, and I see it as evidence for low-level self-modification.
 
Your example shows that experience can alter the brain, which is true.  It does not show that the brain's processing mechanisms are flexible -- namely, the use of neural networks for feature extraction, classification, etc.  Those mechanisms are fixed.  Likewise, we can directly program an AGI's reasoning mechanisms rather than evolve them.

> It can speed up the acquisition of basic knowledge, if the programmer got the assumptions about the world wrong. Which I think is very likely.

 
This is not true.  We know the rules of thinking: induction, deduction, etc., and they are pretty immutable.  Why let the AGI re-learn these rules?
> That is all I am trying to do at the moment: make tools. Whether they are tools to do what you describe as making a Formula 1 car, I don't know.

 
We need a bunch of researchers to focus on making the first functional AGI.  This requires a lot of determination and not getting distracted by too many theoretical issues.  Which doesn't mean that theory is unimportant.  But we need an attitude that is more practical and down-to-earth.  My observation so far is that a lot of researchers have slightly different goals in mind, and the result is that we're not really connecting with each other.

 
"Coming together is a beginning; keeping together is progress; working together is success." -- Henry Ford
 
yky






[agi] [EMAIL PROTECTED]: [alife] breve 2.3: an artificial life simulation environment]

2005-09-12 Thread Eugen Leitl
- Forwarded message from jon klein <[EMAIL PROTECTED]> -

From: jon klein <[EMAIL PROTECTED]>
Date: Mon, 12 Sep 2005 10:26:28 -0400
To: [EMAIL PROTECTED]
Subject: [alife] breve 2.3: an artificial life simulation environment
X-Mailer: Apple Mail (2.733)

Announcing the release of the breve Simulation Environment version 2.3.

 http://www.spiderland.org/breve

breve is an open-source 3D software simulation package for multi-agent
systems, robotics and artificial life research.  This release features
major enhancements to the physical simulation engine, allowing for
faster and more accurate physical simulation.

Features include:

- 3D articulated body physical simulation
- collision detection and response
- rich OpenGL display engine
- easy-to-use integrated scripting language
- extensible plugin architecture
- built-in support for Push, a programming language for
   evolutionary computation (http://hampshire.edu/lspector/push.html)
- runs on Mac OS X, Windows and Linux

___
alife-announce mailing list
[EMAIL PROTECTED]
http://lists.idyll.org/listinfo/alife-announce

- End forwarded message -
-- 
Eugen* Leitl http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE





[agi] Knowledge and learning.

2005-09-12 Thread DGoe


There is a distinction between gathering known knowledge, representing that known knowledge within a knowledge base of a given structure, and learning.

Knowledge is generally facts. 

The "Total Known Knowledge" and  "Machine Known Knowledge" representation 
approach a ratio of 1... 

The ratio is the "Machine Known Knowledge" divided by "Total Known 
Knowledge" 

Real machine learning is the deductions and inferences that can be drawn from machine-known knowledge by the AI programs. Here lies the true learning to learn, rather than the acquisition of known knowledge.

Quote from a SureTrade commercial:
"Insight is critical.
Everything that you respond to has already taken place.
Understanding and being able to act on that knowledge is power.
Fail to do either and you are simply left a witness to history."

Dan Goe


Re: [agi] Knowledge and learning.

2005-09-12 Thread Yan King Yin

> Real machine learning is the deductions and inferences that can be drawn
> from machine-known knowledge by the AI programs. Here lies the true
> learning to learn, rather than the acquisition of known knowledge.
 
Yes, I think the central issue is how an AGI can derive higher-level knowledge from existing knowledge or experience -- for example, seeing 10 red apples and concluding that all apples are red.  We should enumerate all the "inference rules" whereby new knowledge can be generated from existing knowledge.  The inference rules can be fixed.  What is flexible is the set of knowledge in the AGI.  This set can get increasingly abstract as the AGI repeatedly derives new knowledge from the existing base.
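
A minimal sketch of that picture in Python, using the apple example as the induction step (my toy encoding, not a worked-out design; the fact format and rule names are assumptions):

# Fixed inference rules over a growing set of facts (toy encoding).
facts = {("apple", i, "red") for i in range(10)}   # ten observed red apples

def induce(facts):
    # Enumerative induction: if every observed X is P, conjecture "all X are P".
    new = set()
    for kind in {k for (k, _, _) in facts}:
        props = {p for (k, _, p) in facts if k == kind}
        if len(props) == 1:                        # every observation agrees
            new.add((kind, "all", props.pop()))
    return new

def deduce(generals, kind, instance):
    # Deduction: apply "all X are P" to a new instance of X.
    return {(kind, instance, p) for (k, _, p) in generals if k == kind}

generals = induce(facts)                 # {("apple", "all", "red")}
facts |= deduce(generals, "apple", 10)   # an 11th apple, inferred to be red
print(generals)

The rules never change; only the knowledge set grows (and, as with the apples, an induced generalization can of course be wrong).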

 
yky






RE: [agi] Knowledge and learning.

2005-09-12 Thread Ben Goertzel



 
Fixing the inference rules seems to be OK, based on my experience with Novamente's PTL inference system.  And fixing the overall inference control strategy seems to be OK.  But context-specific inference control schemata (working within this overall framework) need to be learned, IMO; otherwise real-world inference cannot tractably work...
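
To make the distinction concrete, a hedged toy sketch in Python (nothing to do with PTL's actual code; the rule names and reward signal are invented): the rule set and the control loop stay fixed, while a per-context preference over rules -- the "control schema" -- is learned from feedback, here as a trivial bandit.

import random

RULES = ["deduction", "induction", "abduction"]          # fixed rule set

class ControlSchema:
    # Learned, per-context preference over the fixed rules.
    def __init__(self):
        self.value = {}                                  # (context, rule) -> score

    def pick(self, context, eps=0.1):
        if random.random() < eps:                        # explore occasionally
            return random.choice(RULES)
        return max(RULES, key=lambda r: self.value.get((context, r), 0.0))

    def update(self, context, rule, reward, lr=0.2):
        v = self.value.get((context, rule), 0.0)
        self.value[(context, rule)] = v + lr * (reward - v)

random.seed(0)
ctl = ControlSchema()
for _ in range(200):
    rule = ctl.pick("math")
    reward = 1.0 if rule == "deduction" else 0.1         # stand-in for success
    ctl.update("math", rule, reward)
print(ctl.pick("math", eps=0.0))                         # "deduction"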
 
ben







[agi] Re: Representing Thoughts

2005-09-12 Thread William Pearson
On 9/12/05, Yan King Yin <[EMAIL PROTECTED]> wrote:
> Will Pearson wrote:
> > Define what you mean by an AGI. Learning to learn is vital if you wish to
> > try and ameliorate the No Free Lunch theorems of learning.
>
> I suspect that No Free Lunch is not very relevant in practice. Any learning
> algorithm has its own implicit way of generalizing, and that may turn out
> to be good enough.

I suspect that it will be quite important in competition between
agents. If one agent has a constant method of learning, it will be more
easily predicted by an agent that can figure out its constant method
(if it is simple). If it changes (and changes how it changes), then
it will be less predictable and may avoid being exploited by other agents.
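
A toy illustration of that point in Python (my example, not anything specified above): in matching pennies, a learner whose update rule is fixed and known can be simulated one step ahead and beaten every round.

# The learner matches the opponent's historically most frequent move.
# The exploiter knows that rule, predicts the move, and plays to mismatch.
counts = {"H": 0, "T": 0}
learner_score = 0

for _ in range(1000):
    learner_move = "H" if counts["H"] >= counts["T"] else "T"
    exploiter_move = "T" if learner_move == "H" else "H"
    counts[exploiter_move] += 1
    learner_score += 1 if learner_move == exploiter_move else -1

print(learner_score)   # -1000: the predictable learner loses every round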

> > Having a system learn from the environment is superior to programming it
> > by hand with no ability to learn from the environment. They are not
> > mutually exclusive. It is superior because humans/genomes have imperfect
> > knowledge of what the system they are trying to program will come up
> > against in the environment.
>
> I agree that learning is necessary, as any sensible person would. The
> question is how to learn efficiently, and *what* to learn. High-level
> mechanisms of thinking can be hard-wired, and that would save a lot of time.

They can also be what I think of as soft-wired: programmed, but also
allowed to be altered by other parts of the system.
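
A toy reading of "soft-wired" in Python (my illustration; the dispatch-table design is an assumption, not a description of any real system): behaviour ships as ordinary code, but other parts of the system may replace it at runtime.

def default_classify(x):
    return "bright" if x > 0.5 else "dark"

system = {"classify": default_classify}   # programmed starting point

def recalibrate(system, threshold):
    # Another component re-wires the behaviour based on experience.
    system["classify"] = lambda x: "bright" if x > threshold else "dark"

print(system["classify"](0.6))   # bright
recalibrate(system, 0.9)         # the environment turned out brighter overall
print(system["classify"](0.6))   # dark -- same input, altered mechanism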

> > It depends what you characterise as learning; I tend to include such
> > things as the visual centres being repurposed for audio processing in
> > blind individuals as learning. There you do not have labeled examples.
>
> My point is that unsupervised learning still requires labeled examples
> eventually.

If you include such things as reward in labelling, and self-labeling,
then I agree. I would like to be able to call the feeling where I don't
want to go to bed because of too much coffee 'the jitterings', and to
learn that.

> Your human brain example is not pertinent to AGI because you're
> talking about a brain that is already intelligent,

But the sub-parts of the brain aren't intelligent, surely? Only the
whole is? You didn't define intelligence, so I can't determine where
our disconnect lies.

> recruiting extra
> resources. We should think about how to build an AGI from scratch. Then
> you may realize that unsupervised learning is problematic.

I work with a strange sort of reinforcement learning as a base. You can
layer whatever sort of learning you want on top of it; I would probably
layer supervised learning on it if the system was going to be social,
which would probably be needed for something to be considered AGI.
 
> We do not have to duplicate the evolutionary process.

I am not saying we should imitate a flatworm, then a mouse, then a bird,
etc. I am saying that we should look at the problem classes solved by
evolution first, and then see how we would solve them with silicon.
This would hopefully keep us on the straight and narrow, not diverging
into a little intellectual cul-de-sac.

> I think directly
> programming a general reasoning mechanism is easier. My approach is to
> look at how such a system can be designed from an architectural viewpoint.

Not my style, but you may produce something I find interesting. 

> > This I don't agree with. Humans and other animals can reroute things
> > unconsciously, such as switching the visual system to see things upside
> > down (having placed prisms in front of the eyes 24/7). It takes a while
> > (over weeks), but then it does happen, and I see it as evidence for
> > low-level self-modification.
>
> Your example shows that experience can alter the brain, which is true. It
> does not show that the brain's processing mechanisms are flexible --
> namely, the use of neural networks for feature extraction, classification,
> etc. Those mechanisms are fixed.

Saying that because the brain uses neurons to classify things, those
methods of classification are fixed, is like saying that because a
Pentium uses transistors to compute things and the transistors are
fixed, what a Pentium can compute is fixed.

Also, if all neurons do is feature extraction/classification etc., how
can we as humans reason and cogitate?

> Likewise, we can directly program an AGI's 
> reasoning mechanisms rather than evolve them.

Once again, I have never said anything about not programming the
system as much as possible.

> > It can speed up the acquisition of basic knowledge, if the programmer got
> > the assumptions about the world wrong. Which I think is very likely.
>
> This is not true. We *know* the rules of thinking: induction, deduction,
> etc., and they are pretty immutable. Why let the AGI re-learn these rules?

Induction and deduction we know. However, there are many things we don't
know. For example, getting information from other humans is an
important part of reasoning. Which humans we should trust, who may be
out to fool us, we