Hi all,
I'm away from home (at a bio conference in Australia), so my email access is
sporadic and this is a bad time for me to start list discussions, but I feel
like it anyway, so here goes ;-)
I've been talking with Pei Wang about whether "computation" is a good
concept for modeling or describing AI systems. He says it's not. I
tend to agree with him.
Here is how I am thinking about the issue...
The computers on which we build our AI/AGI software
are specific, finite physical systems, and the software programs we build are
specific, finite sets of code instructions we feed to these specific physical
systems.
Then, we can choose to *abstract* these physical
systems by considering them as approximations to certain ideal mathematical
systems. For instance, a Turing machine is an ideal mathematical system,
and a real computer can be considered an approximation to a Turing machine
(an approximation because a real computer doesn't actually have an infinite
memory tape).
However, there is a real question as to whether this abstraction/approximation
is of any use for studying intelligence.
What if we think about the "amount of memory that can be accessed per second"
associated with a given hardware device H, and call this M(H)? Special
relativity places a practical bound on M(H): in any one second, a processor
can only reach memory cells lying within one light-second of it, and that
finite volume can hold only finitely many bits. Once we accept such a bound,
we can no longer talk about ordinary computers being equivalent to Turing
machines with infinite tapes, and we can no longer talk about
bisimulation-based equivalence between an arbitrary pair of real computers.
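To make that concrete, here is a toy back-of-the-envelope sketch in Python.
The bit-density constant is a made-up placeholder, not a physical constant
I'm defending; the only point is that the bound comes out finite for *any*
finite density:

import math

C = 3.0e8     # speed of light, in m/s
RHO = 1.0e30  # hypothetical bits per cubic meter -- a placeholder, not physics

def max_memory_per_second(t=1.0, rho=RHO):
    # Everything a processor can touch within t seconds lies inside
    # a sphere of radius c*t, which holds at most rho * volume bits.
    radius = C * t
    volume = (4.0 / 3.0) * math.pi * radius ** 3
    return rho * volume  # finite for any finite rho, however large

print("bound on M(H): %.3g bits/sec" % max_memory_per_second())

Whatever density you plug in, M(H) comes out finite, so the infinite tape
really is an idealization.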
This raises the question: how do the structures/dynamics required to achieve
useful goal-achieving behavior using H depend on M(H)?
Then one must argue that they really do depend on M(H), in the sense that the
useful-for-goal-achieving structures/dynamics for moderate M(H) [such as
Novamente or NARS] are quite different from those for extremely large M(H)
[such as AIXI or the Gödel Machine].
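To gesture at why, here's another toy calculation (the numbers are arbitrary
illustrations, and I'm loosely charging one memory access per candidate
program, which is wildly optimistic): an AIXI-style system in effect weighs
all programs up to some length n, i.e. on the order of 2^n candidates, while
a machine with bounded M(H) gets through only about M(H)*t bits of work in t
seconds.

SECONDS_PER_YEAR = 3.15e7
M_H = 1.0e15  # hypothetical bits-accessed-per-second for a real machine

def years_to_enumerate(n_bits, m_h=M_H):
    # Years needed just to touch each of the 2**n_bits candidate
    # programs once, at one memory access per candidate.
    return (2.0 ** n_bits) / (m_h * SECONDS_PER_YEAR)

for n in (40, 100, 300):
    print("n=%3d bits: %.3g years" % (n, years_to_enumerate(n)))

Exhaustive program search is already hopeless around n=100, so whatever
works at realistic M(H) has to be organized quite differently.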
So the problem is that the identification "intelligent system = universal
computer" relies on ignoring bounds on M(H), yet it seems almost certain that
the optimal ways of achieving useful goal-seeking behavior are strongly
dependent on M(H).
(recall my definition of intelligence as "achieving complex goals in
complex environments")
thoughts? reactions? insults?
ben