Benjamin,

On Mon, Mar 30, 2015 at 12:13 PM, Benjamin Kapp <[email protected]> wrote:
> If the things we think are "multi threaded" and not single threaded, I do not see this as a problem for our ability to create or understand AGI, because we can write multi threaded algorithms (as software developers do frequently to take advantage of all of those cores on the processors we have these days). Thus we are entirely capable of understanding and creating multi threaded algorithms.

I don't think we are so much "multi threaded" as "parallel". The difference is that the number of "threads" approaches the number of neurons. Of course you can simulate such systems on single- or multi-threaded systems, but it is slooooow.

> Shane Legg gave an equation to define intelligence [1,2].

No, he proposed an equation to MEASURE intelligence that already exists.

> If you take this equation as valid, then what is more optimal than something else is what increases intelligence. Of course this doesn't tell you what is the most optimal, but it does let you know whether you are going in the right direction or not.

ONLY once you have more than one sort of intelligence to compare with another - a place from which we remain distant.

> And if you don't accept this equation that is fine, but at least he defined what it is he's talking about. Perhaps if you disagree with it you could propose a better equation?

Let me start by dropping an extraneous term...

At one point in my past career, I was in part in the business of guesstimating the number of undiscovered bugs in software, a problem whose features parallel AGI adaptability. This turns out to be proportional to the square of the length of that portion of the code that exceeds the minimum length to do the job. Hence, if you take the time to write code that is very nearly as short as possible, it will contain VERY few bugs, and hence will be more adaptable to varying input. In the early days of computers, when bits of RAM were made by hand, I worked on some projects with similar goals, and indeed they harbored few bugs. It soon became apparent that if the low-level functionality was well implemented, more "intelligence" was the product of relatively simple high-level routines consisting of lower-level subroutine calls - and hence came relatively cheaply in terms of complexity.

Legg's erroneous term is his complexity term, as this is already contained in his functionality term, presuming of course that the system he is evaluating actually works. Of course, one could contrive systems where this is not the case, but I doubt that one could even contrive a FUNCTIONAL intelligent system that was needlessly more complex, yet worked better because of the needless additional complexity. Of course, if it worked better, the additional complexity would NOT be needless, thereby showing this exercise to be impossible - and thereby, via reductio ad absurdum, Legg's complexity term is extraneous.

Complexity will doubtless grow with functionality until some critical (sub-human?) point is reached, where additional complexity buys little/nothing. Indeed, the very arguments between ad-hoc AI, ML AI, NN AI, etc., center around this complexity issue. To illustrate, the ad-hoc people (e.g. Ben) would agree with me that complexity by itself means nothing, yet it is obvious to nearly everyone that all that complex code restricts rather than enhances functionality - in short, they are paying for that (length-minimum)^2 term that insinuates itself into functionality.
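To make the (length-minimum)^2 heuristic concrete, here is a toy sketch in Python. The proportionality constant k and the example line counts are my own inventions for illustration; a real estimate would calibrate k against a project's actual defect history.

# Toy sketch of the defect heuristic above: estimated undiscovered bugs
# grow as the square of the code length in EXCESS of the minimum length
# needed to do the job. The constant k is hypothetical.
def estimated_defects(length_loc, minimum_loc, k=1e-4):
    """Guesstimate undiscovered defects as k * (length - minimum)^2."""
    excess = max(0.0, length_loc - minimum_loc)
    return k * excess ** 2

# Example: the same job done in 12,000 LOC vs. a near-minimal 5,500 LOC,
# against an estimated 5,000-LOC minimum. Note the ~200x difference.
print(estimated_defects(12000, 5000))  # ~4900 estimated defects
print(estimated_defects(5500, 5000))   # ~25 estimated defects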
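And for reference, since my argument hangs on which of Legg's terms does what, the measure proposed in [2] is, as best I can render it:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where \pi is the agent being measured, E is the set of computable environments, V_\mu^\pi is the expected value that \pi achieves in environment \mu (the functionality term), and 2^{-K(\mu)} weights each environment by its Kolmogorov complexity K (the complexity term I am calling extraneous).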
> I think you need to better found your argument that AGI must

... you forgot "initially" ...

> be recursively designed with some better reasons.

Your omission above shows you missed my point. I am NOT arguing that AGI must be recursively defined, but rather that it necessarily MUST be INITIALLY defined recursively, for the simple reason that a single level of a recursion is MUCH easier for humans to understand and implement than is an entire hyper-complex recursive system unwound and made optimal in a non-recursive definition. In short, this is MUCH more about our very human limitations as implementers than it is about the requirements of potential future AGI implementations. The failures of past AGI efforts make my point pretty clearly.

Example: I was a passenger on a 70' yacht that was maneuvering in tight quarters in the Chula Vista marina when it suddenly went out of control and rammed another yacht in front of it - and kept pushing forward!!! With little idea what the problem might be, and plenty of people at the controls and the point of contact, I decided to run to the engine room and see if I could do anything there. I noticed a metal rod leading from an upper level that was jerking back and forth, which I followed to a broken mechanism that translated that motion to the transmission. They obviously didn't like what gear they were in, so I bumped the lever on the transmission and it changed to neutral, but the jerking continued, so I bumped the lever again, probably into reverse, and the jerking stopped. However, a few seconds later the jerking started again, so I bumped the transmission lever once more. This continued back and forth for ~5 minutes, whereupon the engine stopped and I emerged. I was told of all that I had "missed", whereupon I told them the rest of the story...

With my substantial human intellect, I had to insert SOME sort of useful functionality into a desperate situation. I could NOT identify the desired gear, but I could recognize when they didn't like the gear they were in, so I just kept changing things until satisfaction was achieved - as evidenced by the rod from the gear shift lever 2 stories above me stopping its jerking. Here, I was simulating a simple one-moving-part broken mechanism, only with additional functionality to deal with the lack of knowledge of the desired gear. I suspect that neurons and other cells contain such "smarts" - the ability to recognize when things are better or worse, and to adapt to improve these indications. Of course, no one knows what the "threshold" is.

To illustrate this point, there is a recent CNBC Originals episode (available on Hulu.com) about GM's ignition switch recall that is worth watching. The point that sticks out (for me) but is NOT explicitly mentioned in the episode is that only women have been hurt by this, and most of them have been drunk. It seems to take a certain lack of mechanical skill to be wounded/killed by a defective ignition switch. Here, you must not only solve this relatively simple puzzle, but solve it quickly enough not to be killed by it. This is a timed test.

The patent office is full of simple solutions to challenging problems. Many of these are on a scale where you might think that a single cell could do such things, and indeed some inventions have been inspired by nature. Now, "all" we need do is characterize such "cellular invention" into a simple formula like Legg's.
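The strategy I used in that engine room reduces to a few lines of code. This is a toy sketch of my own, not anyone's published algorithm; the gear names and the in_distress signal are placeholders for whatever distress indication a cell (or a passenger) can sense.

import random

# Bump-until-quiet: we cannot observe the DESIRED state, only a binary
# distress signal (the jerking rod), so we keep perturbing the state
# until the distress goes quiet.
def bump_until_satisfied(states, in_distress, max_bumps=100):
    state = random.choice(states)
    for _ in range(max_bumps):
        if not in_distress(state):
            return state  # satisfaction achieved - the jerking stopped
        # "bump the lever" to any other state
        state = random.choice([s for s in states if s != state])
    return state

# Hypothetical usage: the desired gear is unknown to the controller.
desired = "reverse"
print(bump_until_satisfied(["forward", "neutral", "reverse"],
                           in_distress=lambda gear: gear != desired))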
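Likewise, the recursive definition from my earlier message (quoted below) can be stated as code: a factorial-shaped recursion with the multiplication replaced by a network-forming step. Here minimal_agi() and combine() are placeholders for the undiscovered single-cell theory, nothing more.

# Sketch of the recursive definition of AGI: level 0 is a minimal AGI;
# level n is a network of level n-1 units, like factorial with the
# multiplication replaced by a lower-level aggregate.
def minimal_agi():
    return "cell"  # placeholder minimal implementation

def combine(subunits):
    return {"network": subunits}  # placeholder network-forming step

def recursive_agi(levels, fanout=4):
    if levels == 0:
        return minimal_agi()  # base case, like factorial's 0! = 1
    return combine([recursive_agi(levels - 1, fanout) for _ in range(fanout)])

# cells -> brain regions -> humans -> societies, as one nested structure
society = recursive_agi(levels=3)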
Steve
=========================

> Ref
> [1] https://www.youtube.com/watch?v=V6umr1OP8uo
> [2] http://arxiv.org/abs/0712.3329
>
> On Mon, Mar 30, 2015 at 2:45 PM, Steve Richfield <[email protected]> wrote:
>
>> Aaron,
>>
>> On Mon, Mar 30, 2015 at 8:40 AM, Aaron Hosford <[email protected]> wrote:
>>
>>> I am in agreement with John here regarding human intelligence. We are components of a distributed learning algorithm, whose accumulated intelligence far exceeds the capabilities of any one of us taken in isolation. Likewise for our cells with respect to our bodies.
>>>
>>> Evolution is the key driver here. It's the one algorithm I know of that bootstraps intelligence from the ground up into the recursive networks you pointed out, Steve. The question remains, though: Exactly *what* is being generated by this bootstrapping process? What is it that is gained through this recursive organizational structure? Where is the value added? What can be accomplished through such a recursive organizing principle that cannot be accomplished directly? (I am trying to point the discussion in the right direction by asking the right questions.)
>>
>> There is a problem with this question that seems to have been at the heart of the past lack of AGI success. We have NO clue how we work. Our concept of consciousness has been conjured up by our brains as a model, and there is absolutely NO reason to believe that any such thing actually exists, and plenty of reason to believe it does not, because the things we think cannot possibly be the result of a single-thread process. Perhaps we sense the "success paths" from problems to solutions, but we certainly do NOT have any idea how they are found, etc.
>>
>> Further, there is a rule that you can NOT understand the operation of any optimal system through observation, because optimality could be computed in any of a vast number of ways. It is only through observation of that which is SUBoptimal that we can understand complex systems through observation. However, in AGI we have little idea what is optimal, and so we cannot recognize most of that which is suboptimal, and so there is no effective starting point to develop an understanding.
>>
>> Given the apparent theoretical hopelessness of past "direct" attempts, it appears that the first AGI absolutely MUST be of recursive design. Perhaps by observing its internal operation with suitable debugging tools we can learn enough to then produce a direct design.
>>
>> Can you punch any holes in this logic?
>>
>> Steve
>> ===================
>>
>>> On Sun, Mar 29, 2015 at 2:09 PM, John Rose <[email protected]> wrote:
>>>
>>>> It depends: a minimal implementation as a single-cell AGI, or a single cell with a trillion duplicates as AGI? I was hinting towards the latter. But here is something else to think of –
>>>>
>>>> Suppose the way we see things is not really how it is. And that happens often throughout history. Suppose that the way we see duplicates of things is wrong, so that a trillion bacteria duplicates are actually one organism. Change the dimensionality of the observer. It would be the same with people. To us, individual people agents look like independent entities, but if you tweak the dimensionality of the observation, the whole human race over time can appear as one continuous organism.
>>>> And as far as intelligence goes, that is more correct IMO, since we are multi-agent; IOW, one tabula rasa human isolated from the species is not intelligent and dies immediately.
>>>>
>>>> Just something to think about, as it may solve related issues…
>>>>
>>>> John
>>>>
>>>> *From:* Steve Richfield [mailto:[email protected]]
>>>> *Sent:* Saturday, March 28, 2015 4:19 PM
>>>> *To:* AGI
>>>> *Subject:* [agi] 1%
>>>>
>>>> John, et al,
>>>>
>>>> We seem to have two subjects that are merging. I started out discussing potential halfway points, while you started out discussing single-cell intelligence.
>>>>
>>>> Suppose for a moment there is a method and associated undiscovered mathematics underlying intelligence, where the "minimum implementation" of intelligence might be VERY small - like a single cell.
>>>>
>>>> There is plenty of evidence of this in experiments on the lobster stomatogastric ganglion, where each cell does a specific job that has been identified in the laboratory. However, introduce a birth defect where fewer cells survive, and they organize differently to do the same job, but less precisely.
>>>>
>>>> The behavior of some bacteria is VERY complex, complete with seek and avoid behaviors, eating habits, etc.
>>>>
>>>> Consider the following recursive definition of AGI:
>>>>
>>>> 1. Construct a minimal AGI.
>>>> 2. Connect a bunch of them into a network.
>>>> 3. Construct a network of the above networks.
>>>> 4. Construct a network of the above networks.
>>>> 5. etc.
>>>>
>>>> Perhaps the ultimate AGI program will look like a recursive factorial computation, only replacing the multiplication with a lower-level AGI.
>>>>
>>>> In society, we have cells, networks of cells that form regions of the brain, networks of regions that constitute humans, networks of humans...
>>>>
>>>> Perhaps what is missing in society is what is already there at the cellular level?!!!
>>>>
>>>> Perhaps "all" that is now missing in AGI is a theoretical understanding of how a single cell *IS* a complete minimum implementation of an AGI?!!!
>>>>
>>>> If true, this might bring AGI a LOT closer - and predict the failure of present approaches. At least this deserves a serious look-see.
>>>>
>>>> Steve
>>>> ==================
>>>>
>>>> On Sat, Mar 28, 2015 at 6:24 AM, John Rose <[email protected]> wrote:
>>>>
>>>> 1 day ago - "Obama Administration Releases National Action Plan to Combat Antibiotic-Resistant Bacteria" - $1.2 billion
>>>>
>>>> Very interesting. The microbes overcome everything we throw at them; how could they be intelligent?
>>>>
>>>> People laugh about the concept of microbial intelligence. By many definitions they are more intelligent than us; we may lose this battle. Let's see: if intelligence has mass, which I'm sure no one would dispute, and if we add up the mass of all human brains and compare that with the mass of all related molecular microbial intelligence, I would say that by far microbes have more intelligence. Definitely.
>>>>
>>>> Or is that calculation, meant to be humorous, wrong? Intelligence doesn't have mass...
>>>>
>>>> "Microbes have more intelligence" <=> "Microbes are more intelligent"
>>>>
>>>> At some point, does "more intelligence" beat out "more intelligent"?
>>>> John

--
Full employment can be had with the stroke of a pen. Simply institute a six hour workday. That will easily create enough new jobs to bring back full employment.
