hi,
***
It seems to me that the conceptual
difference between vision & language is in the level of generalization,
aside from different sensor/actuator orientation. Social interaction? Once you start coding things that
are learnable, where do you stop before ending up with just another expert
system? Isn't this all about scalable
learning, which should develop environmentally specific functional
specialization on its own?
***
Well, in Novamente we are not coding *specific
knowledge* that is learnable... but we are coding implicit knowledge
as to what sorts of learning processes are most useful in which specialized
subdomains...
***
What single algorithm? How do you evaluate
'dealing'? How do you derive/select your algorithms for
unknown inputs without first
quantitatively defining your objectives? Your definition of intelligence doesn't
seem to be functional to me; goals can't be defined solely by their
complexity.
Without deductive
derivation we are stuck with trial & error, which can take
millennia.
***
The Novamente design
is mathematically formulated, but not mathematically derived. That is,
individual formulas used in the system are mathematically derived, but the
system as a whole has been designed by intuition (based on integrating a lot of
different ideas from a lot of different domains) rather than by formal
derivation.
In my view, we are
nowhere near possessing the right kind of math to derive a realistic AI design
from definitions in a rigorous way. Juergen Schmidhuber's OOPS system is
an attempt in this direction, but though I like Juergen's work, I think this
design is too simplistic to be a functional
AGI.
Maybe further
work in the OOPS direction will yield something like what you're
suggesting...
***
Also, the reason human learning is so slow
is 'hardware'-specific: it takes a lot longer to build new connections than to
access them. That's not the case for computer hardware.
***
I don't think you're right about
the reason human learning is so slow. It is not just hardware
inefficiency; it is also the fact that a lot of trial-and-error-based algorithms are
used in the brain.
***
That's true, but the human brain is an accident of incremental & obviously unfinished
evolution, not some grand design. Besides, I think to some extent these
different areas are specialized not so much by genetic design but by the impact
of the input types they receive. In any case, you must admit, this stone-age
'design' doesn't perform very well now & it will get worse as the
changes accelerate.
***
The human brain has many flaws and
is not a perfect guide for AGI, but it has far more general intelligence than
any existing computer program, and so it is certainly worth carefully studying
when designing a would-be AGI system.
Novamente is intended to ultimately go beyond what the
human brain can accomplish, but for version 1 we'll be content to achieve
human-level general intelligence ;-)
-- Ben Goertzel