There was one little line in this post that struck me, and I wanted to comment:

Quoting Ed Porter <[EMAIL PROTECTED]>:

With regard to performance, such systems are not even close to human brain
level but they should allow some interesting proofs of concepts

Mentioning some huge system. My thought was: wow, that just sounds sad. But I guess it depends on what you mean by performance. One thing in which computers now far exceed the brain is the reliability of their operations. Sure, it's difficult to say what a basic brain operation is (is a synapse reaction equivalent to a multiply-accumulate?), but one thing that can be said about such operations is that they aren't very reliable or precise. Neurons have a sort of range of operation, where they will act roughly a certain way given an input. It must be really hard to get valuable behavior out of a system like that, so the brain uses massive redundancy. Now, it might well be that beyond just reliability, this kind of system gets other value from the noise, like a probabilistic mode of operation that is valuable in itself. Maybe the inherent unpredictability is part of what we mean by intelligence; personally I suspect that is true. But all of this stands in great contrast to how computers naturally work: obeying information-processing instructions with absolute precision (possibly error-free, depending on how you look at it).
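The redundancy point is easy to make concrete. Here is a toy sketch of my own (not anything from the quoted post): a binary "unit" that gives the right answer only with probability p, and a redundant version that takes a majority vote over n independent copies. The names and parameters are all made up for illustration.

```python
import random

def noisy_unit(correct_output, p=0.7, rng=None):
    """An unreliable binary 'unit': returns the right answer with probability p."""
    rng = rng or random
    if rng.random() < p:
        return correct_output
    return 1 - correct_output  # flip the bit on error

def redundant_unit(correct_output, n=51, p=0.7, rng=None):
    """Majority vote over n independent noisy copies of the same unit."""
    votes = [noisy_unit(correct_output, p, rng) for _ in range(n)]
    return int(sum(votes) > n / 2)

# Estimate reliability empirically.
rng = random.Random(42)
trials = 2000
single_ok = sum(noisy_unit(1, rng=rng) == 1 for _ in range(trials)) / trials
group_ok = sum(redundant_unit(1, rng=rng) == 1 for _ in range(trials)) / trials
print(single_ok, group_ok)
```

With these numbers, a single unit is right about 70% of the time, while the 51-way majority vote is right well over 99% of the time. The point is just that individually sloppy components can be composed into something very reliable, which is plausibly part of what brains are doing.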

There is a sort of mismatch between good human brain behavior and good computer behavior. It seems like the AGI project is about making a computer act like a good brain: we focus on getting a computer to act in the ways a brain acts when it acts intelligently, by which I mean something like having some basic operations and systems that can be used in all situations. But I think it might also be good to ask what the best ways are for a computer to be intelligent. I'm a patchwork-AGI kind of guy, and while surely there must be some general mechanism, it makes sense that there could also be many very finely crafted modules. Unfortunately, if we restrict modules to human-written modules, that is the basic problem. A basic function of an AGI should be that it can write programs for itself (or for other systems) to handle tasks. And if it can do that, then those programs don't need such huge amounts of computing power.
andi



-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
