On Fri, Sep 18, 2015 at 5:32 PM, Jim Bromer <[email protected]> wrote:
> There is a possibility that the methods that we come up with would be too
> slow only because there is something about them which inherently tends to
> be exponentially complex... The problem is that the source of knowledge is
> going to be distributed, so if a recursive improvement on some initial
> guesses is going to be dependent on examining the different ways of
> interpreting a situation, then the numerous ambiguity-like possibilities
> that have to be checked can require an exponential number of steps. Every
> time you try to improve a response you add more components of knowledge
> into the problem.

On Sat, Sep 19, 2015 at 3:11 PM, EdFromNH . <[email protected]> wrote:

> The brain's neural net architecture is massively parallel. It has tens of
> billions of parallel processors (neurons). It has hundreds of trillions of
> interconnects (synapses). It does what it does without the issue you
> discuss creating much of a problem on many tasks.

No. My comment was about programming, not about the brain. However, you do
not know how many "parallel processors" the brain can effectively employ at
one moment, since you do not understand how the brain coordinates its
activities and how it is able to produce thought. We know that the brain is
capable of consecutive actions, and they are (presumably) necessary for
thought.

It would be easy to incorporate simple massively parallel capabilities into
computer memory chips. For instance, we might have a parallel search which
looks through (specialized) RAM (or flash-like types of memory) to find
strings (or string-like occurrences of data) that are identical or which
follow from some simple program. I call this Parallel Search RAM. Is this
really enough to make AGI possible, or is there more to it than that? If
massive parallelism were the solution, then why wasn't it obvious that
massively parallel computers were solving the problem when a great deal of
money was spent on building them?
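The "Parallel Search RAM" idea above can be sketched in software. Everything
below is a hypothetical illustration -- the class name, word layout, and
operations are invented for this sketch, and the Python loop merely simulates
comparators that actual hardware would fire simultaneously, one per memory
word:

```python
# Software sketch of the hypothetical "Parallel Search RAM": memory in
# which every word is compared against a query at the same time, so a
# search would cost roughly one memory cycle instead of one pass per
# word.  The loop below stands in for the per-word comparators.

class ParallelSearchRAM:
    def __init__(self, size):
        self.words = [None] * size            # one "word" per address

    def store(self, address, data):
        self.words[address] = data

    def match(self, predicate):
        """Return every address whose word satisfies `predicate`.
        In hardware, all these comparisons would happen in one cycle."""
        return [addr for addr, word in enumerate(self.words)
                if word is not None and predicate(word)]

ram = ParallelSearchRAM(1024)
ram.store(3, "cat")
ram.store(97, "catalog")
ram.store(512, "dog")

# Strings that are identical, and strings that "follow from some simple
# program" (here, a prefix test standing in for that simple program).
exact = ram.match(lambda w: w == "cat")
prefix = ram.match(lambda w: w.startswith("cat"))
print(exact)   # [3]
print(prefix)  # [3, 97]
```

Content-addressable memories (CAMs), which network routers already use for
route lookup, behave roughly like the exact-match case, so hardware of this
general kind is buildable; whether it would be sufficient for AGI is exactly
the question posed above.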
And the fact is that our current networks are massively parallel. OK,
software in general and AI in particular was at a somewhat more primitive
stage at that time, but still, shouldn't parallelism have shown something
when those machines were tried? If the brain's parallelism is the key, then
shouldn't our current network parallelism be adequate to demonstrate that it
is? Watson employed parallelism once it could be designed to enhance a
program that had been developed on networks of desktops. I think what I am
getting at is that your dismissal of what might be an essential contemporary
problem, by saying that the brain does what it does without the issue that I
mentioned, is a non sequitur.

On Sat, Sep 19, 2015 at 3:11 PM, EdFromNH . <[email protected]> wrote:

> But evolution has shown that reaction speed is often more important than
> always being correct.

That is relevant, but the problem is that some aspects of handling
situations require a great deal of correctness in order to get some traction
at even elementary (or sub-) AGI stages.

Jim Bromer

On Sat, Sep 19, 2015 at 3:11 PM, EdFromNH . <[email protected]> wrote:

> The brain's neural net architecture is massively parallel. It has tens of
> billions of parallel processors (neurons). It has hundreds of trillions of
> interconnects (synapses). It does what it does without the issue you
> discuss creating much of a problem on many tasks. Of course, the brain
> frequently makes mistakes. But evolution has shown that reaction speed is
> often more important than always being correct.
>
> There is a good chance we will be able to build computers having many of
> these beneficial properties of the brain within 5 to 15 years.
>
> On Fri, Sep 18, 2015 at 5:32 PM, Jim Bromer <[email protected]> wrote:
>
>> On Fri, Sep 18, 2015 at 5:32 PM, EdFromNH . <[email protected]> wrote:
>>
>>> Yes - "present-day computers are orders of magnitude too slow to do
>>> anything useful" as the computational architecture for an AGI.
>>
>> There is a possibility that the methods that we come up with would be too
>> slow only because there is something about them which inherently tends to
>> be exponentially complex. That is the problem that occurs when possible
>> results have to be recursively improved on according to information that
>> comes from different sources to produce comparisons. The problem is that
>> the source of knowledge is going to be distributed, so if a recursive
>> improvement on some initial guesses is going to be dependent on examining
>> the different ways of interpreting a situation, then the numerous
>> ambiguity-like possibilities that have to be checked can require an
>> exponential number of steps. Every time you try to improve a response you
>> add more components of knowledge into the problem.
>>
>> Jim Bromer
>>
>> On Fri, Sep 18, 2015 at 5:32 PM, EdFromNH . <[email protected]> wrote:
>>
>>> Steve,
>>>
>>> Yes - "present-day computers are orders of magnitude too slow to do
>>> anything useful" as the computational architecture for an AGI.
>>>
>>> For 4 decades I have pissed people off in the AI community who were
>>> saying that software, not hardware, was the problem, by saying the
>>> following:
>>>
>>> "I have a relatively simple thesis: there's no reason to believe an AI
>>> could have approximately the human-like intelligence of a person unless
>>> it had within several magnitudes the computational capacity of the human
>>> brain as measured by the metrics at which the human brain currently
>>> exceeds computers by many orders of magnitude."
>>>
>>> I had one AI programmer get hostile when I said that. Another time, I
>>> told one of the major speakers at the 1997 AAAI Conference that -- until
>>> we had computers millions of times more powerful than those most of the
>>> AI community could then get their hands on -- AIs would not be able to
>>> think like humans.
>>> His response was, "I have no idea what I could do useful with a computer
>>> a million times more powerful than I currently have." To which I
>>> responded, in my mind, "That's only because you and the leadership in
>>> the AI community haven't thought much about it."
>>>
>>> I have been thinking about what I could do with machines having
>>> trillions of bytes of memory and many billions of processing elements
>>> ever since I took my year-long independent study my senior year at
>>> college, reading a long list of books and articles written for me by
>>> Marvin Minsky. I was particularly influenced by Minsky's brief K-Line
>>> Theory paper. (Although Deb Roy of MIT's Speechome project told me that
>>> K-line was first developed by someone other than Minsky.)
>>>
>>> So yes, no even remotely human-like AGIs can be built without very
>>> expensive hardware. That's why I am spending much of my time trying to
>>> understand the architecture of the brain, because it is the "GI" that
>>> AGI wants to at least match. I am quite confident that we will be able
>>> to make relatively inexpensive (for the cost of a premium automobile)
>>> AGI brains in 5 to 15 years, using neuromorphic architectures that
>>> substantially match virtually all human cognitive capabilities and
>>> exceed humans in many capabilities by thousands or millions of times.
>>>
>>> Ed Porter

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
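Jim's argument about ambiguity and recursive improvement, quoted earlier in
the thread, can be made concrete with a toy count. Assuming each component
of knowledge admits k interpretations and every joint interpretation must be
checked, the work grows as k^n. The function below is a minimal illustration
of that growth, not a model of any actual system:

```python
from itertools import product

# Toy illustration of the exponential blow-up described above: each
# component of knowledge is ambiguous several ways, and refining a
# response requires checking every combination of interpretations.

def interpretations_to_check(components, per_component):
    # Enumerate every joint interpretation of `components` pieces of
    # knowledge, each ambiguous `per_component` ways, and count them.
    return sum(1 for _ in product(range(per_component), repeat=components))

for n in range(1, 7):
    print(n, interpretations_to_check(n, 3))   # 3, 9, 27, 81, 243, 729

# Each refinement step that "adds more components of knowledge into the
# problem" multiplies the number of cases by another factor of 3.
```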

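For scale, the brain-capacity figures quoted in this thread can be turned
into a back-of-envelope estimate. The synapse count is taken from the emails
above; the average firing rate and the serial-machine comparison are assumed
ballpark values, so the result is an order-of-magnitude illustration only:

```python
# Rough capacity comparison implicit in the hardware argument above.
# The synapse count comes from the figures quoted in the thread; the
# firing rate and the serial-machine throughput are assumed ballpark
# values, so treat the output as an order-of-magnitude illustration.

synapses = 300e12          # "hundreds of trillions of interconnects"
firing_rate_hz = 10        # assumed average neuron firing rate, Hz

brain_events_per_sec = synapses * firing_rate_hz   # synaptic events/sec
serial_ops_per_sec = 1e9   # hypothetical 1 GHz machine, one op per cycle

gap = brain_events_per_sec / serial_ops_per_sec
print(f"brain ~{brain_events_per_sec:.0e} synaptic events/s, "
      f"~{gap:.0e}x a 1 GHz serial machine")
```

On these assumptions the raw-throughput gap is around a factor of a million,
which is the kind of number behind the "millions of times more powerful"
remark; Jim's objection stands apart from it, since raw capacity says
nothing about how the activity is coordinated.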