oops, i meant 1895 ... damn that dyslexia ;-) ... though the other way was
funnier, it was less accurate!!
On Sat, Oct 11, 2008 at 8:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
I'm only pointing out something everybody here knows full well:
embodiment in various forms has, so far, failed to provide any real help in
cracking the NLU problem. Might it in the future? Sure. But the key word
there is might.
To me, you sound like a guy in 1985 saying "So far, wings
Dave,
Well, I thought I'd described how pretty well. Even why. See my recent
conversation with Dr. Heger on this list. I'll be happy to answer specific
questions based on those explanations but I'm not going to repeat them
here. Simply haven't got the time.
Although I have not been
Hi Brad,
An interesting point of conceptual agreement between the OCP and Texai designs
is that very specifically engineered bootstrapping processes are necessary
to push into AGI territory. Attempting to summarize using my limited
knowledge, Texai hopes to achieve that bootstrapping via reasoning
Dr. Matthias Heger wrote:
Brad Pausen wrote: The question I'm raising in this thread is more one of
priorities and allocation of scarce resources. Engineers and scientists
comprise only about 1% of the world's population. Is human-level NLU
worth the resources it has consumed, and will
Brad,
Your post describes your position *very* well, thanks.
But, it does not describe *how* or *why* your AI system might achieve domain
expertise any faster/better/cheaper than other narrow-AI systems (NLU
capable, embodied, or otherwise) on its way to achieving networked-AGI. The
list would
Brad Pausen wrote:
The question I'm raising in this thread is more one of priorities and
allocation of scarce resources. Engineers and scientists comprise only
about 1% of the world's population. Is human-level NLU worth the resources
it has consumed, and will continue to consume, in the