Jim,

My NL patent is for a lossless approach - where the result is EXACTLY the
same as if it had been computed using exhaustive methods. To illustrate,
there are literally tens of thousands of idioms in common English, so
checking for all of them at every point in the text would slow things WAY
down. However, every idiom has a least-frequently-used (LFU) word that can
be used to trigger a check for the presence of the rest of the idiom. Most
words are the LFU word in just a few idioms, so you only need to check for
those few idioms at each input word, rather than checking tens of
thousands of idioms at every input word. This speeds things up by ~1000:1
or more.
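To make the triggering idea concrete, here is a minimal sketch (the tiny
idiom list and the choice of trigger words are illustrative assumptions on
my part, not anything from the patent):

```python
# Hypothetical sketch: index idioms by an (assumed) least-frequently-used
# word, then scan the text and check only the few idioms each word triggers,
# instead of matching every idiom at every position.
from collections import defaultdict

idioms = ["kick the bucket", "spill the beans", "break the ice"]
lfu_of = {"kick the bucket": "bucket",   # assumed LFU word choices
          "spill the beans": "beans",
          "break the ice": "ice"}

trigger_index = defaultdict(list)
for idiom in idioms:
    trigger_index[lfu_of[idiom]].append(idiom.split())

def find_idioms(text):
    words = text.lower().split()
    hits = []
    for i, w in enumerate(words):
        for idiom_words in trigger_index.get(w, []):
            # Align the candidate window on the LFU word's position.
            start = i - idiom_words.index(w)
            if start >= 0 and words[start:start + len(idiom_words)] == idiom_words:
                hits.append(" ".join(idiom_words))
    return hits

print(find_idioms("do not spill the beans before we break the ice"))
# prints ['spill the beans', 'break the ice']
```

Because every trigger still runs the full match, the result is exactly what
exhaustive checking would produce - the triggering only prunes the work.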

Looking at neurons - most seem to work as probabilistic AND gates. Yeah, I
know, they tend to add their inputs, but where those inputs represent the
logarithms of things, you are in effect multiplying them - which is what
you do with probabilities to compute their AND. So, you would determine
the least commonly satisfied conditions (inputs) and trigger re-computation
only on changes in those inputs. As you mentioned, there would be some
learning involved, but you ONLY need to do that logic when the neuron is
activated. Probably the simplest nearly-zero-overhead approach would be to
run the learning algorithm on every thousandth re-computation of a neuron's
output.
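As a toy numeric check of that log-sum view (the probabilities and the
triggering rule here are illustrative assumptions):

```python
import math

def and_gate_prob(inputs):
    # A neuron that sums log-probability inputs computes, after
    # exponentiation, the product of the probabilities - i.e. the
    # probability of the AND of independent conditions.
    return math.exp(sum(math.log(p) for p in inputs))

probs = [0.9, 0.8, 0.05]  # three input conditions; 0.05 is rarely satisfied
assert abs(and_gate_prob(probs) - 0.9 * 0.8 * 0.05) < 1e-12

# Triggered re-computation: watch the least commonly satisfied input,
# since the output stays near zero unless that condition fires.
trigger = min(range(len(probs)), key=probs.__getitem__)
print(trigger)  # prints 2, the index of the rarest input
```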

Triggering doesn't work so well in "analog" areas like the visual
front-end, where it isn't apparent what to use as a trigger. However,
vision is (presently) far removed from textual processing.

Steve
==================

On Sun, Sep 20, 2015 at 7:53 AM, Jim Bromer <[email protected]> wrote:

> On Sat, Sep 19, 2015 at 11:20 PM, Steve Richfield <
> [email protected]> wrote:
>
>>
>> Then enters my approach of cleverly triggered processing, where
>> sufficient cleverness can collapse this WAY down, to something like 3X.
>> However this comes with an interesting cost - you must design your
>> program's structure around your selected cleverness, because there is
>> (usually) NO way to build it in later. This GREATLY ups the cost of
>> experimental coding.
>>
>> So, before you write your first line of code, you must figure out how you
>> are going to collapse the base of the exponent that is about to explode. I
>> patented the approach I saw for natural language and can see how related
>> approaches could work elsewhere.
>>
>> If you don't yet see how to do this, then you are nowhere near to being
>> able to write functional code. THIS is the "entrance exam".
>>
>
> When "dynamic space decomposition algebras" was mentioned I was able to
> find a simple example of that process and that helped me to understand what
> the term probably means. You mentioned "cleverly triggered processing," so
> I suspect that there is something conceptually common to both those
> methods. The problem is that for dynamic space decomposition algebras to
> work, their applications have to be simplified through idealization. Here the
> problem is that for true generality these simplifications would have to be
> generated via learning and that implies that they either would be too
> numerous and diversified to be useful or a simplified set of idealizations
> would have to be applied to different overlapping layers of the problem.
> The problem with this is that since some aspects of the problem have to be
> understood very precisely there is no way to effectively do that for a
> highly varied possibility world.
>
> Jim Bromer
>
> On Sat, Sep 19, 2015 at 11:20 PM, Steve Richfield <
> [email protected]> wrote:
>
>> Jim,
>>
>> It's parallel to the chess playing problem. Each half-move you look ahead
>> multiplies the difficulty by ~20X, so the difference between an early
>> vacuum tube computer and the latest PCs amounts to only ~1.5 additional
>> moves of lookahead.
>>
>> Really clever programming can shrink the 20X to something smaller,
>> perhaps 5X, but at some cost in being able to synthesize new strategies to
>> meet new situations - NOT a good handicap for a world champion (or an AGI).
>> There is a famous game where then world champion Lasker sacrificed his
>> queen with no prospect for making it up in an exchange - and then went on
>> to win the game a few moves later. NO present chess playing program could
>> make such a move - or defend against such a move. This approach would get 3
>> more moves of lookahead going from vacuum tubes to PCs while sacrificing
>> some world-class performance.
>>
>> However, chess is HIGHLY constrained compared with natural language (or
>> anything else in the real world), where there are MANY more than 20
>> possible candidates for the next word (or other input). The result is that
>> jumping from vacuum tubes to PCs buys little improvement in performance!!!
>> Note that "modern" chatbots are little better than the original Eliza - and
>> for good reason.
>>
>> Then enters my approach of cleverly triggered processing, where
>> sufficient cleverness can collapse this WAY down, to something like 3X.
>> However this comes with an interesting cost - you must design your
>> program's structure around your selected cleverness, because there is
>> (usually) NO way to build it in later. This GREATLY ups the cost of
>> experimental coding.
>>
>> So, before you write your first line of code, you must figure out how you
>> are going to collapse the base of the exponent that is about to explode. I
>> patented the approach I saw for natural language and can see how related
>> approaches could work elsewhere.
>>
>> If you don't yet see how to do this, then you are nowhere near to being
>> able to write functional code. THIS is the "entrance exam".
>>
>> Steve
>> ============
>>
>> On Sat, Sep 19, 2015 at 7:25 PM, Jim Bromer <[email protected]> wrote:
>>
>>> On Fri, Sep 18, 2015 at 5:32 PM, Jim Bromer <[email protected]> wrote:
>>>
>>>> There is a possibility that the methods that we come up with would be
>>>> too slow only because there is something about them which inherently
>>>> tends to be exponentially complex... The problem is that the source of
>>>> knowledge is going to be distributed so if a recursive improvement on some
>>>> initial guesses is going to be dependent on examining the different ways of
>>>> interpreting a situation then the numerous ambiguity-like possibilities
>>>> that have to be checked can require an exponential number of steps. Every time
>>>> you try to improve a response you add more components of knowledge into the
>>>> problem.
>>>>
>>>
>>>
>>>> On Sat, Sep 19, 2015 at 3:11 PM, EdFromNH . <[email protected]> wrote:
>>>>
>>> The brain's neural net architecture is massively parallel.  It has tens
>>>> of billions of parallel processors (neurons).  It has hundreds of trillions
>>>> of interconnects (synapses).  It does what it does without the issue you
>>>> discuss creating much of a
>>>> problem on many tasks.
>>>>
>>>
>>> No. My comment was about programming, not about the brain. However, you
>>> do not know how many "parallel processors" the brain can effectively employ
>>> at one moment since you do not understand how the brain coordinates its
>>> activities and how it is able to produce thought. We know that the brain is
>>> capable of consecutive actions and they are (presumably) necessary for
>>> thought.
>>>
>>> It would be easy to incorporate simple massively parallel capabilities
>>> into computer memory chips. For instance we might have a parallel search
>>> which looks through (specialized) RAM (or flash-like types of memory) to
>>> find strings (or string-like occurrences of data) that are identical or
>>> which follow from some simple program. I call this Parallel Search RAM. Is
>>> this really enough to make AGI possible or is there more to it than that?
>>>
>>> If massive parallelism were the solution, then why wasn't it obvious that
>>> massively parallel computers were solving the problem when a great deal of
>>> money was spent building them? And the fact is that our current networks
>>> are massively parallel. OK, software in general and AI in particular was at
>>> a somewhat more primitive stage at that time, but still, shouldn't
>>> parallelism have shown something when it was tried? If the brain's
>>> parallelism is the key then shouldn't our current network parallelism be
>>> adequate to demonstrate that it is? Watson employed parallelism once it
>>> could be designed to enhance a program that they developed on networks of
>>> desktops.
>>>
>>> I think what I am getting at is that your dismissal of what might be an
>>> essential contemporary problem by saying that the brain does what it does
>>> without the issue that I mentioned is a non-sequitur.
>>>
>>> On Sat, Sep 19, 2015 at 3:11 PM, EdFromNH . <[email protected]> wrote:
>>>
>>>> But evolution has shown that reaction speed is often more important
>>>> than always being correct.
>>>>
>>>
>>> That is relevant but the problem is that some aspects of handling
>>> situations require a great deal of correctness in order to get some
>>> traction at even elementary (or sub-) AGI stages.
>>>
>>> Jim Bromer
>>>
>>> On Sat, Sep 19, 2015 at 3:11 PM, EdFromNH . <[email protected]> wrote:
>>>
>>>> The brain's neural net architecture is massively parallel.  It has tens
>>>> of billions of parallel processors (neurons).  It has hundreds of trillions
>>>> of interconnects (synapses).  It does what it does without the issue you
>>>> discuss creating much of a problem on many tasks.  Of course, the brain
>>>> frequently makes mistakes.  But evolution has shown that reaction speed is
>>>> often more important than always being correct.
>>>>
>>>> There is a good chance we will be able to build computers having many
>>>> of these beneficial properties of the brain within 5 to 15 years.
>>>>
>>>> On Fri, Sep 18, 2015 at 5:32 PM, Jim Bromer <[email protected]>
>>>> wrote:
>>>>
>>>>> On Fri, Sep 18, 2015 at 5:32 PM, EdFromNH . <[email protected]>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> Yes - "present-day computers are orders of magnitude too slow to do
>>>>>> anything useful" as the computational architecture for an AGI.
>>>>>>
>>>>>
>>>>> There is a possibility that the methods that we come up with would be
>>>>> too slow only because there is something about them which inherently
>>>>> tends to be exponentially complex. That is the problem that occurs when
>>>>> possible results have to be recursively improved on according
>>>>> to information that comes from different sources to produce comparisons.
>>>>> The
>>>>> problem is that the source of knowledge is going to be distributed so if a
>>>>> recursive improvement on some initial guesses is going to be dependent on
>>>>> examining the different ways of interpreting a situation then the numerous
>>>>> ambiguity-like possibilities that have to be checked can require an
>>>>> exponential number of steps. Every time you try to improve a response you
>>>>> add more components of knowledge into the problem.
>>>>> Jim Bromer
>>>>>
>>>>> On Fri, Sep 18, 2015 at 5:32 PM, EdFromNH . <[email protected]>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> Steve,
>>>>>>
>>>>>> Yes - "present-day computers are orders of magnitude too slow to do
>>>>>> anything useful" as the computational architecture for an AGI.
>>>>>>
>>>>>> For 4 decades I have pissed off people in the AI community who were
>>>>>> saying that software, not hardware, was the problem, by saying the
>>>>>> following:
>>>>>>
>>>>>> "I have a relatively simple thesis: there's no reason to believe an
>>>>>> AI could have approximately the human-like intelligence of a person
>>>>>> unless it had within several magnitudes the computational capacity of
>>>>>> the human brain as measured by the metrics at which the human brain
>>>>>> currently exceeds computers by many orders of magnitude."
>>>>>>
>>>>>>
>>>>>> I had one AI programmer get hostile when I said that.  Another time,
>>>>>> I told one of the major speakers at the 1997 AAAI Conference that --
>>>>>> until we had computers millions of times more powerful than what most
>>>>>> of the AI community could then get their hands on -- AIs would not be
>>>>>> able to think
>>>>>> like humans.  His response was "I have no idea what I could do useful
>>>>>> with a computer a million times more powerful than I currently have."
>>>>>> To which I responded, in my mind, "that's only because you and the
>>>>>> leadership in the AI community haven't thought much about it."
>>>>>>
>>>>>> I have been thinking about what I could do with machines having
>>>>>> trillions of bytes of memory and many billions of processing elements
>>>>>> ever since I took my year-long independent study my senior year at
>>>>>> college reading a long list of books and articles written for me by
>>>>>> Marvin Minsky.
>>>>>> I was particularly influenced by Minsky's brief K-Line Theory paper.
>>>>>> (Although Deb Roy of MIT's Speechome project told me that K-Line was
>>>>>> first developed by someone other than Minsky.)
>>>>>>
>>>>>> So yes, no even remotely human-like AGIs can be built without very
>>>>>> expensive hardware.  That's why I am spending much of my time trying
>>>>>> to understand the architecture of the brain, because it is the "GI" that
>>>>>> AGI wants to at least match.  I am quite confident that we will be
>>>>>> able to make relatively inexpensive (for the cost of a premium
>>>>>> automobile) AGI brains in 5 to 15 years using neuromorphic
>>>>>> architectures that substantially match virtually all human cognitive
>>>>>> capabilities, and exceed humans in many capabilities by thousands or
>>>>>> millions of times.
>>>>>>
>>>>>> Ed Porter
>>>>>>
>>>>>> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>>> <https://www.listbox.com/member/archive/rss/303/10443978-6f4c28ac> |
>>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>>> <http://www.listbox.com>
>>>
>>
>>
>>
>> --
>> Full employment can be had with the stroke of a pen. Simply institute a
>> six hour workday. That will easily create enough new jobs to bring back
>> full employment.
>>
>>
>
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.



