On 5/17/07, Shane Legg <[EMAIL PROTECTED]> wrote:

> This just shows the complexity of "the usual meaning of the word
> intelligence" --- many people do associate it with the ability to solve
> hard problems, but at the same time, many people (often the same
> people!) don't think a brute-force solution shows any intelligence.

I think this comes from the idea people have that things like intelligence
and creativity must derive from some very clever process.  A relatively
dumb process implemented on a mind-blowingly vast scale intuitively
doesn't seem like it could be sufficient.

I think the intelligent design movement gets its strength from this
intuition.  People think, "How could something as complex and amazing
as the human body and brain come out of not much more than random coin
flips?!?!?"
They figure that the algorithm of evolution is just too simple and therefore
dumb to do something as amazing as coming up with the human brain.
Only something with super human intelligence could achieve such a thing.

The solution I'm proposing is that we consider that relatively simple
rules, when implemented on sufficiently vast scales, can be very
intelligent.  From this perspective, humans are indeed the product of
intelligence, but the intelligence isn't God's, it's a 4-billion-year
global-scale evolutionary process.

You have mixed two very different topics: (1) whether human
intelligence is designed (or whether evolution is powerful enough to
produce intelligence) and (2) whether intelligence can be designed.

We have no disagreement on the former, and we are discussing the
latter. I assume you are not arguing that evolution is the only way to
produce intelligence, or that we should start an evolutionary process
and then wait for it to produce intelligence.

> At this point, you see "capability" as more essential, while
> I see "adaptivity" as more essential.

Yes, I take capability as primary.  However, adaptivity is implied
by the fact that being adaptable makes a system more capable.

Not always. A brute-force system doesn't adapt, though it can be very
powerful in certain situations, as you suggested.

> today, conventional computers
> solve many problems better than the human mind, but I don't take that
> as reason for them to be more intelligent.

The reason for that, I believe, is that the set of problems they can
solve is far too narrow.  If they were able to solve a very wide range
of problems, through brute force or otherwise, I would be happy to call
them intelligent.  I suspect that most people, when faced with a machine
that could solve amazingly difficult problems, pass a Turing test, etc...,
would refer to the machine as being intelligent.  They wouldn't really care
if internally it was brute forcing stuff by running some weird quantum XYZ
system that was doing 10^10^1000000 calculations per second.  They
would simply see that the machine seemed to be much smarter than
themselves and thus would say it was intelligent.

I see your point, and agree that it is why people associate
intelligence with capability, though I think this path will miss the
more valuable aspects of the notion.

> for most people, that will
> happen only when my system is producing results that they consider as
> impressive, which will not happen soon.

Speaking of which, you've been working on NARS for 15 years!

Actually it is longer than that.

As the theory of NARS is not all that complex (at least that was my
impression after reading your PhD thesis and a few other papers),
what's the holdup?  Even working part time, I would have thought
that 15 years would have been enough to complete the system
and demonstrate its performance.

The technical design of NARS is indeed simpler than most other AGI
projects, but this very approach actually makes the conceptual
design harder --- given my theoretical assumptions, I cannot base my
work on existing theories, but have to do almost everything on my own.
I cannot integrate an existing technique as a module into NARS to make
it more powerful, but must unify new ideas into the single technique
to cover a larger and larger area. Many people think this is impossible,
though I haven't been stopped by any problem yet. I've been mainly
following my road map --- if you read my recent book, you can see how
much I've achieved since my PhD thesis. The progress is not as fast as
I wish, but given the resources spent on the project, I don't think
NARS has made less progress than any other AGI project --- as long as
"progress" is not measured by the number of lines of source code.

I'll check where you are in your research after 15 years. ;-)

In Ben's case, I understand that psynet/webmind/novamente have all
been fairly different from each other, and complex.  So I understand
why it takes so long.  But NARS seems to be much simpler, and
the design seems more stable over time?

The design of NARS has been relatively stable, though new
functionality has been added into it from year to year. You can see my
road map in http://nars.wang.googlepages.com/wang.roadmap.pdf

> > It seems to me that what you are defining would be better termed
> > "intelligence efficiency" rather than "intelligence".
>
> What if I suggest to rename your notion "universal problem solver"?  ;-)

To tell the truth, I wouldn't really mind too much!  After all, once a
sufficiently powerful all purpose problem solver exists I'll simply ask
it to work out what the best way to define intelligence is and then
ask it to build a machine according to this definition.

I don't think that will happen --- it reminds me of "Deep Thought" in
"The Hitchhiker's Guide to the Galaxy".

> but I really don't see how you can put the current AGI projects, which
> are as diverse as one can imagine, into the framework you are proposing. If
> you simply say that the ones that don't fit are uninteresting to
> you, the others can say the same about your framework, right?

Sure, they might not want to build something that is able to achieve an
extremely wide range of goals in an extremely wide range of environments.

No. If they are similar to me, it will be because they don't assume
sufficient knowledge and resources in this process.

All I'm saying is that this is something that is very interesting to me,
and that it also seems like a pretty good definition of "intelligence".

Fully understand --- everyone has this feeling for their working definition.

Pei

-----
This list is sponsored by AGIRI: http://www.agiri.org/email