> We may be misinterpreting each other. What I mean by learning being
> necessary for intelligence is that a system that cannot learn is not
> intelligent. Unless you posit some omnipotent, omniscient entity. Not
> that a system must learn before it becomes intelligent.
> 
> > What is the minimal internal state it would need to
> > start with if any? Is the system, before any input, intelligent?
> 
> I'm not sure what you are getting at here. I am tempted to answer
> with, "Can a plane, before it has left the ground, fly?"

I am trying to understand what intelligence is at its smallest definable
level, mathematically. What is the minimal intelligence machine? Are
there non-intelligent entities that need to be combined to form
intelligence? What exactly is it?

> 
> > There could
> > be a very simple mathematical definition of intelligence.
> >
> This is also a bit opaque to me: are you talking about a definition
> based on the ability to solve problems, or a mathematical definition of
> the internal structure/dynamics? One I think possible; the other... not
> so much.

I think that they are related. Problems require intelligence to solve. 
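For concreteness, one existing candidate for a problem-solving-based definition is Legg and Hutter's universal-intelligence measure. This is my illustration, not something proposed in this thread: it scores an agent by its expected performance across all computable environments, weighted toward simpler ones.

```latex
% Legg-Hutter universal intelligence (illustrative sketch):
% an agent pi's intelligence is its simplicity-weighted expected
% performance over the set E of computable environments mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here K(mu) is the Kolmogorov complexity of the environment mu, and V^pi_mu is the agent's expected total reward in mu, so "solving more problems, especially simple ones" directly raises the score.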

I'm sure definitions can be arrived at from pancomputationalism and
digital-physics perspectives...

John

-----
This list is sponsored by AGIRI: http://www.agiri.org/email