> If it was just a matter of writing the code, then it would have been done
> 50 years ago.
If proving Fermat's Last Theorem were just a matter of doing math, it would
have been done 150 years ago ;-p
Obviously, all hard problems that can be solved have already been solved...
???
--
Logic has not solved AGI because logic is a poor model of the way people think.
Neural networks have not solved AGI because you would need about 10^15 bits of
memory and 10^16 OPS to simulate a human-brain-sized network.
Genetic algorithms have not solved AGI because the computational requirements
are even greater.
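For scale, a quick back-of-envelope sketch of where figures like those come
from (the constants below are common textbook estimates, assumed here purely
for illustration, not a claim about the poster's own derivation):

# Rough brain-scale arithmetic; every constant is an assumed
# order-of-magnitude estimate, not a measured value.
neurons = 1e11               # ~10^11 neurons in a human brain
synapses_per_neuron = 1e4    # ~10^4 synapses per neuron
firing_rate_hz = 10          # ~10 Hz average spike rate

synapses = neurons * synapses_per_neuron     # ~10^15 synapses
memory_bits = synapses * 1                   # ~1 bit per synapse -> ~10^15 bits
ops_per_second = synapses * firing_rate_hz   # ~10^16 synaptic events per second

print(f"memory ~ {memory_bits:.0e} bits, compute ~ {ops_per_second:.0e} OPS")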
All of the major AI paradigms, including those that are capable of
learning, are flat according to my definition. What makes them flat
is that the method of decision making is minimally structured and they
funnel all reasoning through a single narrowly focused process that
smushes different inputs together.
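One way to make that concrete (my reading of "flat" as propositional,
matching the "(propositional)" gloss later in the thread; the toy names
are hypothetical):

# A flat/propositional learner sees only a fixed-width feature vector;
# whatever objects and relations produced it are smushed into the slots.
flat_input = [0.3, 1.0, 0.0, 7.2]    # slot meanings are baked in and opaque

# A structured representation keeps objects and relations explicit, so
# different kinds of input need not funnel through one uniform process.
structured_input = {
    "objects":   ["block1", "block2", "table"],
    "relations": [("on", "block1", "block2"), ("on", "block2", "table")],
}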
Steve,
Dp/dt methods do not fundamentally change the space of possible models
(if your initial mathematical claim of equivalence is true). What I am
saying is that that model space is *far* too small. Perhaps you know
some grammar theory? Markov models are not even as expressive as
regular grammars.
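A standard example of that gap (my own sketch, not from the thread): the
regular language "strings over {a, b} with an even number of a's" needs
only a two-state automaton, but an order-k Markov model conditions only on
the last k symbols and so cannot separate strings that differ in parity
yet end in the same k-gram:

def dfa_even_as(s: str) -> bool:
    # Two-state DFA for the regular language "even number of a's".
    state = 0                # 0 = even count so far, 1 = odd
    for ch in s:
        if ch == 'a':
            state ^= 1       # flip parity on every 'a'
    return state == 0

def markov_context(s: str, k: int) -> str:
    # All an order-k Markov model can condition on: the last k symbols.
    return s[-k:]

k = 3
x = 'aaaa' + 'bbb'           # four a's  -> in the language
y = 'aaaaa' + 'bbb'          # five a's  -> not in the language
assert markov_context(x, k) == markov_context(y, k)   # identical contexts
print(dfa_even_as(x), dfa_even_as(y))   # True False: the DFA separates them;
                                        # no order-3 model can

The same trick defeats any fixed order k, while a regular grammar, which
carries state, has no such trouble.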
Abram,
On 1/6/09, Abram Demski wrote:
>
> Well, I *still* think you are wasting your time with "flat"
> (propositional) learning.
I'm not at all sure that I understand what you are saying here, so some
elaboration is probably in order.
> I'm not saying there isn't still progress to
> be made in