Re: [agi] Low I.Q. AGI
On Tue, Apr 17, 2007 at 02:37:01PM -0400, Eric Baum wrote:

> Could you be more specific, please? What specific applications do you
> think are high value?

Anything they want to do, and don't have to do to get the bills paid. I presume many here have loftier aspirations than their day job (except for a lucky few, for whom the two are identical).

The interesting capability threshold in AI is autopoietic automation a la http://www.molecularassembler.com/KSRM.htm which is about insect-level for macroscale self-replicators in an unsupportive environment.

-- Eugen Leitl http://leitl.org
Re: [agi] Low I.Q. AGI
Eugen> Ability to instantiate vanilla experts at the drop of a hat
Eugen> (without having to spend some 30 years, megabucks, and a failure
Eugen> rate of 98%) is a major advantage already. In fact, automation
Eugen> about as smart as an insect can completely transform
Eugen> transportation, manufacturing, the military, and a few other
Eugen> disciplines, leaving people to deal with more worthwhile
Eugen> tasks (specialization is for insects).

Could you be more specific, please? What specific applications do you think are high value?
Re: [agi] Low I.Q. AGI
> Since I voiced my concern with the "AGI => Singularity" automatic assumption here earlier (give me 1000 times more time than Einstein to think up Relativity Theory and I still couldn't; give me 1000 times more data and I'll be seeing less, not more, forest), let me add corollaries to/musings about Jef's argument:
>
> (1) if (by force) we confine a super-AGI to a single problem situation or even our own limited environment for "long enough" (ignore the ethical slavery aspect for a moment), won't it go crazy - just like many geniuses go crazy, or at the very least become very eccentric, after a relatively short life of intensive intellectual creativity

For any advanced system such as this that we expect to interact and learn, we would not be able to put it in a room alone and just say "go". Given that, yes, it would eventually devolve into some bad data and bad conclusions. We will have to interact with, guide, and work with these intelligences to ensure they are not diverging down a different path. A simple example is a path-finding algorithm: if we see that it is trying to reach a northern city along a road but is going many miles south for no reason, we can step in and say, hey, go this way instead, by changing the algorithm, the data, or the direction (see the toy sketch further below).

> (2) will we recognise the difference between AGI genius and AGI craziness even at the early stage in its life - we hardly recognize it in human geniuses (and remember that the parameters in a normal human only need to be slightly off before (s)he is considered crazy - it'll be hard enough to get the parameters right for our human-level AGI)

This is why AGIs will need to have high accountability: they will need to explain their reasons to humans and experts, and be able to justify why they suggest a certain route. For instance, in the movie "Idiocracy" the future humans use Gatorade to water all the crops. When asked why, they reply, "Cause its gots the stuff a body needs", and don't know anything beyond that catchphrase. Crops may actually do well on the ingredients of Gatorade, but if a computer suggested this and didn't have any explanation why, we would definitely think it was crazy, unless it had many experiments showing good effects.

> (3) once/if it goes off in its own super-intelligence space (likely to be in intellectual domains such as maths) I doubt that we will ever be able to recognize what it does (try reading an advanced maths, physics or theology/philosophy book)

Correct :} Unfortunately, once it reaches some point in the future, it will suggest something and explain it, but the explanation itself could be beyond our comprehension. What we do at that point is unknown. This type of work is being done with Project Halo, http://www.projecthalo.com, where the AIs had to pass a chemistry exam. It was not enough that they could answer the questions correctly, which they did very well; they also had to explain their answers in words, concisely and understandably. This proved to be much harder, but they still passed an AP Chemistry exam.

Working with my 4-year-old daughter, I see her doing some very crazy things, but I know she is interacting with the world and seeing what works and what doesn't. She put a shirt on over her nightgown - no harm, no foul, it didn't hurt or cause any trouble, but I had to tell her, no, you don't do that. I think that is the way much of AGI learning will need to be done.
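Something like this toy Python sketch is what I have in mind (everything here is invented purely for illustration - the grid, the costs, and the penalty mechanism are hypothetical, not anyone's actual system). A human overseer "corrects" a path finder just by adding penalty data to cells it should avoid, and the plan changes:

import heapq

def plan(grid, start, goal, penalties=None):
    # Uniform-cost search over a 2D grid of 0 (free) / 1 (blocked) cells.
    # `penalties` maps cells to extra cost supplied by a human overseer.
    penalties = penalties or {}
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                extra = penalties.get(step, 0)
                heapq.heappush(frontier, (cost + 1 + extra, step, path + [step]))
    return None

grid = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(plan(grid, (2, 0), (0, 2)))
# -> [(2, 0), (1, 0), (0, 0), (0, 1), (0, 2)]: the system's own route
print(plan(grid, (2, 0), (0, 2), penalties={(1, 0): 10, (0, 0): 10}))
# -> a same-length route avoiding the flagged cells: "go this way instead"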
If we can create a very basic framework that allows complex interactions with the AGI, and gives the AGI great freedom to try things on its own while being corrected or offered other suggestions, it could grow naturally in its environment.

James Ratcliff

Jean-Paul Van Belle <[EMAIL PROTECTED]> wrote:

> "Jef Allbright" <[EMAIL PROTECTED]> 2007/04/15 21:40:06:
> > While such a machine intelligence will quickly far exceed human capabilities, from its own perspective it will rapidly hit a wall due to having exhausted all opportunities for effective interaction with its environment. It could then explore an open-ended possibility space à la Schmidhuber, but such increasingly detached exploration will be increasingly detached from "intelligence" in an effective sense.
> >
> > On 4/15/07, Pei Wang <[EMAIL PROTECTED]> wrote:
> > > However, to me "Singularity" is a stronger claim than "superhuman intelligence". It implies that the intelligence of AI will increase exponentially, to a point that is shorter than what we can perceive or understand. That is what I'm not convinced of.
Re: [agi] Low I.Q. AGI
> I'm not clear on why he thinks human level intelligence is
> "understandable", or even what he means by this.

As Ben stated, there are really two different issues of "understanding":

1. Understanding the code and process.
2. Understanding the intelligence (and the actions taken by it).

1. You're right, and it is known for most complex systems - jets, computers, the internet - that no one person "knows" or understands everything about how they work. But they do work, they can be created by groups, and parts can be understood if studied. A general understanding of what a jet is and does is available.

2. However, understanding the intelligence would seem, on one level, to be a requirement for having a human-level AGI. I.e., if you have the AGI there and it is acting in a bizarre and strange manner, and can't explain why, then we cannot really say it is a human-level AGI; we can only say that it is a machine that acts randomly (we have enough of those already). For the AGI to be useful in the world, it really does need to be able to explain itself. It should list and give reasons why it wants to put a fin underneath the jet at a 45° angle - that it will increase stability - and show a graph or the math behind its justification (a toy sketch of such a justification record follows at the end of this message). Then this smaller subset problem can more easily be understood by a person or group of experts.

At some point an AGI may become so advanced that the reasons behind any one action are so complex that we cannot understand or follow them. But at that point I believe it will have surpassed the label of "human level" and be something more. Then we would have to trust the machines based on past performance, or other procedures would be used, or the machine may have to prove in tests that its suggestions are good and work.

James Ratcliff

Eric Baum <[EMAIL PROTECTED]> wrote:

Pei> According to my belief, the way to create AGI is to have a
Pei> general theory of intelligence, which should cover the common
Pei> principle under all kinds of intelligent systems, including human
Pei> intelligence, computer intelligence, etc., even alien
Pei> intelligence and superhuman AGI. Therefore, this theory should
Pei> also cover your AGI0 to AGIn.

According to my belief, which I also claim to have published a strong case for, we have such a theory, in which the common principle underlying intelligence is that of Occam programs, which are computationally hard to extract. (I don't mean a program in the Occam language, but a program constructed according to an extrapolated Occam's razor.) Also according to this belief, "understanding" consists of having such an Occam program that exploits underlying structure in order to generalize. According to this belief, unfortunately, the Occam program underlying our intelligence is itself unlikely to admit any more compact Occam program understanding it, and thus may be inherently not understandable.

According to this picture, if we can succeed in creating a Human Level Intelligence (according to this picture, there roughly speaking doesn't exist any truly "general" intelligence), the way we will do it is by building some structures/code that then computes and builds other structures/code that comprises the code of the Human Level Intelligence. The actual Human Level Intelligence will likely not be understandable in any meaningful sense.
Ben's comments, and to some extent his approach to AGI - building code and then hoping that when run it will produce a complex set of patterns that do stuff - seem somewhat related to this, except that for some reason he stipulates that human intelligence is understandable. I'm not clear on why he thinks human level intelligence is "understandable", or even what he means by this.

Richard> efforts (some people seem to think that there is something
Richard> inherently impossible about a human being able to design
Richard> something smarter than itself, but that idea is really just
Richard> science-fiction hearsay, not grounded in any real
Richard> limitations).

Well, no, it is grounded in real limitations. I doubt, Richard, that even you think you could "design" a human level intelligence by hand, any more than you could personally design a Mirage jet, the blueprints for which filled a warehouse. At the very least you would want to use a computer, write code for the computer, and have the computer do a lot of the design for you by running the code. At the end of that process, you wouldn't necessarily "understand" much about how that design worked. And if the very guts of the reason that design worked are that it contains programs that were output by finding approximate solutions to computationally intractable problems, you'd be in real trouble.

James Ratcliff - http://falazar.com
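To make the "explain yourself" requirement concrete, here is a minimal toy sketch in Python (every field name is invented for illustration - this is not any existing system's format): the idea is just that each suggestion carries a machine-readable record that a human expert can audit in isolation.

from dataclasses import dataclass, field

@dataclass
class Justification:
    suggestion: str                                # what the system wants to do
    expected_effect: str                           # why, in plain words
    evidence: list = field(default_factory=list)   # experiments, math, graphs

    def report(self) -> str:
        lines = [f"Suggestion: {self.suggestion}",
                 f"Expected effect: {self.expected_effect}"]
        lines += [f"  - evidence: {e}" for e in self.evidence]
        return "\n".join(lines)

j = Justification(
    suggestion="mount a fin under the fuselage at a 45 degree angle",
    expected_effect="increases yaw stability at cruise speed",
    evidence=["wind-tunnel run #12: 18% less yaw oscillation",
              "stability-derivative calculation, section 3"])
print(j.report())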
Re: [agi] Low I.Q. AGI
James,

I addressed these issues in http://nars.wang.googlepages.com/wang.AI_Definitions.pdf

Pei

On 4/17/07, James Ratcliff <[EMAIL PROTECTED]> wrote:
> Pei,
>
> First it would seem you need to come to a consensus definition of "intelligence", and I'm not sure how much your theory would, or would need to, cover anything beyond the definition there?
>
> James Ratcliff
Re: [agi] Low I.Q. AGI
Jef,

There shouldn't truly be any wall there; it would first have to exhaust EVERY means of data input via text, video, audio, etc. At that point it would be left with the limits of physical interaction: interacting with a human by talking, interacting directly in the world via a physical body, and mental introspection about the world and its interactions. By this time, though, the vast amount of information about the world would let it conjecture endlessly about how the world acts, and suggest good physical tests to perform on the environment.

James Ratcliff

Jef Allbright <[EMAIL PROTECTED]> wrote:
> I too generally agree with the improving intelligence scenario Richard described, but would like to point out a rarely appreciated aspect: While such a machine intelligence will quickly far exceed human capabilities, from its own perspective it will rapidly hit a wall due to having exhausted all opportunities for effective interaction with its environment. It could then explore an open-ended possibility space à la Schmidhuber, but such increasingly detached exploration will be increasingly detached from "intelligence" in an effective sense.
>
> - Jef
Re: [agi] Low I.Q. AGI
Pei,

First it would seem you need to come to a consensus definition of "intelligence", and I'm not sure how much your theory would, or would need to, cover anything beyond the definition there?

James Ratcliff

Pei Wang <[EMAIL PROTECTED]> wrote:
> Well, I surely don't mean an AIXI type of theory. I believe that all kinds of intelligence can be explained as the capability of adaptation with insufficient knowledge and resources. I understand that you don't share this understanding of intelligence.
>
> Pei
Designing something smarter than yourself [WAS Re: [agi] Low I.Q. AGI]
Eric Baum wrote:

> Richard> efforts (some people seem to think that there is something
> Richard> inherently impossible about a human being able to design
> Richard> something smarter than itself, but that idea is really just
> Richard> science-fiction hearsay, not grounded in any real
> Richard> limitations).
>
> Well, no, it is grounded in real limitations. I doubt, Richard, that even you think you could "design" a human level intelligence by hand, any more than you could personally design a Mirage jet, the blueprints for which filled a warehouse. At the very least you would want to use a computer, write code for the computer, and have the computer do a lot of the design for you by running the code. At the end of that process, you wouldn't necessarily "understand" much about how that design worked. And if the very guts of the reason that design worked are that it contains programs that were output by finding approximate solutions to computationally intractable problems, you'd be in real trouble.

A bit of confusion going on here, I think. I was not talking about a human "understanding" the design of something smarter than a human -- that point is being debated in parallel, and is quite different from what I said. I was only talking about the pop-science idea that a human, because it has a certain level of intelligence, could never in principle design something that could then become smarter than the human. It is a (false) generalization of the idea that you cannot pull yourself up by your own bootstraps. Only a small point: I don't think you would agree with the position I was trying to oppose there.

But meanwhile, about the parallel question of whether a human could *understand* a human-level intelligence. The points you make above could be applied to an "aircraft designer". Such a person could design a new aircraft perfectly well... in a certain sense. They would not be qualified to design, say, all the details of the in-flight entertainment system, down to every last transistor in the amplifier of the sound system -- but then, we wouldn't say "Ha! You don't really know how to design an aircraft!"

Richard
Re: [agi] Low I.Q. AGI
Since I voiced my concern with the "AGI => Singularity" automatic assumption here earlier (give me 1000 times more time than Einstein to think up Relativity Theory and I still couldn't; give me 1000 times more data and I'll be seeing less, not more, forest), let me add corollaries to/musings about Jef's argument:

(1) if (by force) we confine a super-AGI to a single problem situation or even our own limited environment for "long enough" (ignore the ethical slavery aspect for a moment), won't it go crazy - just like many geniuses go crazy, or at the very least become very eccentric, after a relatively short life of intensive intellectual creativity

(2) will we recognise the difference between AGI genius and AGI craziness even at the early stage in its life - we hardly recognize it in human geniuses (and remember that the parameters in a normal human only need to be slightly off before (s)he is considered crazy - it'll be hard enough to get the parameters right for our human-level AGI)

(3) once/if it goes off in its own super-intelligence space (likely to be in intellectual domains such as maths) I doubt that we will ever be able to recognize what it does (try reading an advanced maths, physics or theology/philosophy book)

Jean-Paul Van Belle
Department of Information Systems
Email: [EMAIL PROTECTED]
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

> "Jef Allbright" <[EMAIL PROTECTED]> 2007/04/15 21:40:06:
>
> While such a machine intelligence will quickly far exceed human capabilities, from its own perspective it will rapidly hit a wall due to having exhausted all opportunities for effective interaction with its environment. It could then explore an open-ended possibility space à la Schmidhuber, but such increasingly detached exploration will be increasingly detached from "intelligence" in an effective sense.
>
> On 4/15/07, Pei Wang <[EMAIL PROTECTED]> wrote:
> > However, to me "Singularity" is a stronger claim than "superhuman intelligence". It implies that the intelligence of AI will increase exponentially, to a point that is shorter than what we can perceive or understand. That is what I'm not convinced of.
Re: [agi] Low I.Q. AGI
Hi Eric,

> According to my belief, which I also claim to have published a strong case for, we have such a theory, in which the common principle underlying intelligence is that of Occam programs, which are computationally hard to extract. (I don't mean a program in the Occam language, but a program constructed according to an extrapolated Occam's razor.) Also according to this belief, "understanding" consists of having such an Occam program that exploits underlying structure in order to generalize. According to this belief, unfortunately, the Occam program underlying our intelligence is itself unlikely to admit any more compact Occam program understanding it, and thus may be inherently not understandable.

I don't quite agree with this perspective, though my view is pretty close. I also find it useful to view understanding in terms of algorithmic information, but I think that "finding the shortest program capable of computing X" is not a good way of conceptualizing "understanding X". Rather, I think that "finding the fuzzy set of programs capable of compressing X, relative to one's knowledge base K" is a better perspective. For a complex X, there will be many different programs capable of compressing X. Just finding the shortest program for computing X does not necessarily give a complete understanding of X. (A toy numeric illustration of compression-as-pattern-detection follows at the end of this message.)

> Ben's comments, and to some extent his approach to AGI of building code and then hoping when run it will produce a complex set of patterns that do stuff seems somewhat related to this,

Yes, it's related. The Novamente system can be viewed as attempting to find a bunch of compressing programs in relevant datasets, especially in datasets of the form "carrying out action A in context C will lead to achievement of goal G". This is not necessarily the most useful way to view the system in practice, but it is a correct way. And of course, in accordance with the "no free lunch" theorem, the idea is that it should be good at finding compressing programs in datasets of the above form that actually occur in the practical life of an embodied agent, not in mathematically general datasets of the above form.

> except for some reason he stipulates that human intelligence is understandable. I'm not clear on why he thinks human level intelligence is "understandable", or even what he means by this.

I've tried to clarify this above. What I mean is that humans are able to detect many meaningful patterns (read: patterns = compressing programs, if you like) in human behaviors ... and once brain scans work better, I bet we will be able to detect many very meaningful, significant patterns emergent between human behaviors and the output of brain scanners...

OTOH, for a massively superhuman AI, the quantity of patterns we will be able to detect in this way may be far, far less, because most of the significant patterns in its behavior and state may have an algorithmic information content far beyond the capacity of our brains.

-- Ben G
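A toy numeric illustration of the "patterns = compressing programs" idea, using off-the-shelf zlib compression as a crude, computable stand-in for algorithmic information (the standard trick behind normalized compression distance; this is a sketch of the concept only, not Novamente's actual machinery):

import random
import zlib

def c(data: bytes) -> int:
    # Compressed length: a rough upper bound on algorithmic information.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: small means shared structure.
    return (c(x + y) - min(c(x), c(y))) / max(c(x), c(y))

structured = b"abab" * 200                                  # highly patterned
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(800))    # patternless

print(c(structured), c(noisy))       # the patterned string is far shorter
print(ncd(structured, structured))   # small: a compressor finds the shared pattern
print(ncd(structured, noisy))        # near 1: no common pattern to exploit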
Re: [agi] Low I.Q. AGI
Pei> According to my belief, the way to create AGI is to have a
Pei> general theory of intelligence, which should cover the common
Pei> principle under all kinds of intelligent systems, including human
Pei> intelligence, computer intelligence, etc., even alien
Pei> intelligence and superhuman AGI. Therefore, this theory should
Pei> also cover your AGI0 to AGIn.

According to my belief, which I also claim to have published a strong case for, we have such a theory, in which the common principle underlying intelligence is that of Occam programs, which are computationally hard to extract. (I don't mean a program in the Occam language, but a program constructed according to an extrapolated Occam's razor.) Also according to this belief, "understanding" consists of having such an Occam program that exploits underlying structure in order to generalize. According to this belief, unfortunately, the Occam program underlying our intelligence is itself unlikely to admit any more compact Occam program understanding it, and thus may be inherently not understandable.

According to this picture, if we can succeed in creating a Human Level Intelligence (according to this picture, there roughly speaking doesn't exist any truly "general" intelligence), the way we will do it is by building some structures/code that then computes and builds other structures/code that comprises the code of the Human Level Intelligence. The actual Human Level Intelligence will likely not be understandable in any meaningful sense.

Ben's comments, and to some extent his approach to AGI - building code and then hoping that when run it will produce a complex set of patterns that do stuff - seem somewhat related to this, except that for some reason he stipulates that human intelligence is understandable. I'm not clear on why he thinks human level intelligence is "understandable", or even what he means by this.

Richard> efforts (some people seem to think that there is something
Richard> inherently impossible about a human being able to design
Richard> something smarter than itself, but that idea is really just
Richard> science-fiction hearsay, not grounded in any real
Richard> limitations).

Well, no, it is grounded in real limitations. I doubt, Richard, that even you think you could "design" a human level intelligence by hand, any more than you could personally design a Mirage jet, the blueprints for which filled a warehouse. At the very least you would want to use a computer, write code for the computer, and have the computer do a lot of the design for you by running the code. At the end of that process, you wouldn't necessarily "understand" much about how that design worked. And if the very guts of the reason that design worked are that it contains programs that were output by finding approximate solutions to computationally intractable problems, you'd be in real trouble.
Re: [agi] Low I.Q. AGI
Eugen wrote:

> Of course a point could be made that reconstructing function from structure (which in principle can be obtained from vitrified brain sections at arbitrary resolution) is less far off than AI bootstrap.

I personally feel AI bootstrap is significantly closer, but arguing about the relative timing of uncertain advances probably isn't that productive (at least, for it to be productive it would need to become a way more in-depth technical conversation).

BTW, I am not sure your statement about reconstructing function from structure being achievable via studying vitrified brain sections is correct. The problem is dynamics. Reconstructing brain function from a series of time-slices of brain-state is a well-defined math problem, though perhaps a very hard computational one (a minimal sketch of this time-series version follows at the end of this message). However, reconstructing the dynamics of a system from a single time-slice is a well-defined math problem only if one brings the laws of physics into the picture -- and we do not know how to apply the laws of physics to systems of the scale and complexity of brains in a computationally tractable way, with sufficient detail to reconstruct system dynamics from a single time-slice.

And, of course, vitrification does not give us a series of time-slices of brain-state. At best it gives us one time-slice of certain very important aspects of the state. Thus, I suspect that advances in brain scanning technology are what are going to give us the ability to reconstruct brain function. Give it another 15-25 years and we'll have the spatiotemporal resolution of brain scanning to gather the data whose analysis will allow us to really understand the beauty and absurdity of what goes on in our brains...

-- Ben G
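A minimal sketch of why a series of time-slices makes the problem well-posed: assume, purely for illustration, linear dynamics x_{t+1} = A x_t (the brain is of course nothing this simple). Given enough successive snapshots, ordinary least squares recovers A; a single snapshot, by contrast, constrains A not at all.

import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 50
A_true = 0.4 * rng.normal(size=(n, n))   # the unknown "laws of motion"

X = np.empty((n, T))                     # each column is one time-slice
X[:, 0] = rng.normal(size=n)
for t in range(T - 1):
    X[:, t + 1] = A_true @ X[:, t]

# Fit A from successive snapshot pairs: A @ X[:, :-1] ~ X[:, 1:]
A_est = np.linalg.lstsq(X[:, :-1].T, X[:, 1:].T, rcond=None)[0].T

print(np.max(np.abs(A_est - A_true)))    # tiny residual: dynamics recovered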
Re: [agi] Low I.Q. AGI
On Sun, Apr 15, 2007 at 06:41:39PM -0400, Benjamin Goertzel wrote:

> A key point is that, unlike a human, a well-architected AGI should be
> able to easily increase its intelligence via adding memory, adding
> faster processors, adding more processors, and so forth. As well as

It's not that I don't see how a human would profit from enhancements; it's just that most of them would require germline manipulation, or technology quite beyond what is available today.

> by analyzing its own processes and their flaws with far more accuracy
> than any near-term brain scan...

Of course a point could be made that reconstructing function from structure (which in principle can be obtained from vitrified brain sections at arbitrary resolution) is less far off than AI bootstrap.

-- Eugen Leitl http://leitl.org
Re: [agi] Low I.Q. AGI
On 4/15/07, Pei Wang <[EMAIL PROTECTED]> wrote:

> Well, I surely don't mean an AIXI type of theory. I believe that all kinds of intelligence can be explained as the capability of adaptation with insufficient knowledge and resources. I understand that you don't share this understanding of intelligence.

I don't disagree with this statement, but I suspect that different QUANTITATIVE levels of insufficiency may lead to QUALITATIVELY different principles of intelligence...

For the moment, however, it will be sufficient for us humans to understand the qualitative principles leading to human-level AGI. This is what I have tried to do in my own theoretical work. And, I think your work has been very valuable in advancing our understanding of these principles...

-- Ben G
Re: [agi] Low I.Q. AGI
On 4/16/07, Pei Wang <[EMAIL PROTECTED]> wrote:

> A general theory of intelligence will not give us a detailed AGI design, but it will provide the assumptions and restrictions that such a design should follow, no matter how the implementation details are determined. Also, it will tell us why the traditional AI approaches failed. For these reasons, it is not trivial or vacuous.

Absolutely - it's necessary for us to know e.g. cognitive science for the reasons you point out. I merely observe that while necessary, it is not close to being sufficient.
Re: [agi] Low I.Q. AGI
A general theory of intelligence will not give us a detailed AGI design, but it will provide the assumptions and restrictions that such a design should follow, no matter how the implementation details are determined. Also, it will tell us why the traditional AI approaches failed. For these reasons, it is not trivial or vacuous.

Pei

On 4/15/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> Indeed we do have some such theories already. Thing is, any theory which covers such different things must necessarily say very little about any of them in particular.
>
> To again make use of the flight analogy, is there a theory that covers both a bird and an F-22? Well yes, aerodynamics. However, if you look at what you actually need to know to design an F-22, aerodynamics is only the tiniest fraction of it. You need to know (or the team collectively needs to know - it's too much for any one person) a vast amount about engines and fuels, metallurgy, electronics, manufacturability, operational procedures and a hundred other things I don't even know enough to list - all of which are peculiar to man-made aircraft and do not apply to birds.
>
> Nor is this state of affairs peculiar to flight - it applies to every complex artifact. It undoubtedly also applies to AGI.
Re: [agi] Low I.Q. AGI
On 4/16/07, Pei Wang <[EMAIL PROTECTED]> wrote:

> According to my belief, the way to create AGI is to have a general theory of intelligence, which should cover the common principle under all kinds of intelligent systems, including human intelligence, computer intelligence, etc., even alien intelligence and superhuman AGI. Therefore, this theory should also cover your AGI0 to AGIn.

Indeed we do have some such theories already. Thing is, any theory which covers such different things must necessarily say very little about any of them in particular.

To again make use of the flight analogy, is there a theory that covers both a bird and an F-22? Well yes, aerodynamics. However, if you look at what you actually need to know to design an F-22, aerodynamics is only the tiniest fraction of it. You need to know (or the team collectively needs to know - it's too much for any one person) a vast amount about engines and fuels, metallurgy, electronics, manufacturability, operational procedures and a hundred other things I don't even know enough to list - all of which are peculiar to man-made aircraft and do not apply to birds.

Nor is this state of affairs peculiar to flight - it applies to every complex artifact. It undoubtedly also applies to AGI.
Re: [agi] Low I.Q. AGI
Well, I surely don't mean an AIXI type of theory. I believe that all kinds of intelligence can be explained as the capability of adaptation with insufficient knowledge and resources. I understand that you don't share this understanding of intelligence.

Pei

On 4/15/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> > According to my belief, the way to create AGI is to have a general
> > theory of intelligence, which should cover the common principle under
> > all kinds of intelligent systems, including human intelligence,
> > computer intelligence, etc., even alien intelligence and superhuman
> > AGI. Therefore, this theory should also cover your AGI0 to AGIn.
>
> Ahhh... Well, this gets at the crux of our disagreement.
>
> I have my doubts that such a theory is possible. I think it may be possible to create a general theory of "roughly human level intelligence"... just as Hutter and colleagues seem to be hot on the trail of a general theory of "near infinite computational power intelligence".
>
> But I suspect that computer systems with processing power and memory vastly greater than humans' but vastly less than is needed for algorithms like Hutter's AIXItl to be possible will display forms of intelligence that aren't covered by either the Hutter-type theories or the theories covering roughly-human-level intelligence...
>
> True, there may be commonalities between sub-AIXItl-level superhuman, human level, and AIXItl level intelligences. But, I suspect the differences will be at least as dramatic...
>
> As examples, I think that the sorts of self, awareness and "perceived free will" that characterize the human mind may not apply to all superhuman intelligences.
>
> I can certainly imagine a superhuman AGI whose cognition is governed by complex, emergent patterns that are beyond human comprehension. I might be able to understand in some general sense that its behaviors are guided by emergent patterns, that it seems to be engaged in calculating probabilities, etc. -- but the main structures and dynamics guiding its cognition might be new principles, which apply only to minds vastly smarter than humans and can't be grokked by mere human brains...
>
> -- Ben
Re: [agi] Low I.Q. AGI
> According to my belief, the way to create AGI is to have a general
> theory of intelligence, which should cover the common principle under
> all kinds of intelligent systems, including human intelligence,
> computer intelligence, etc., even alien intelligence and superhuman
> AGI. Therefore, this theory should also cover your AGI0 to AGIn.

Ahhh... Well, this gets at the crux of our disagreement.

I have my doubts that such a theory is possible. I think it may be possible to create a general theory of "roughly human level intelligence"... just as Hutter and colleagues seem to be hot on the trail of a general theory of "near infinite computational power intelligence".

But I suspect that computer systems with processing power and memory vastly greater than humans' but vastly less than is needed for algorithms like Hutter's AIXItl to be possible will display forms of intelligence that aren't covered by either the Hutter-type theories or the theories covering roughly-human-level intelligence...

True, there may be commonalities between sub-AIXItl-level superhuman, human level, and AIXItl level intelligences. But, I suspect the differences will be at least as dramatic...

As examples, I think that the sorts of self, awareness and "perceived free will" that characterize the human mind may not apply to all superhuman intelligences.

I can certainly imagine a superhuman AGI whose cognition is governed by complex, emergent patterns that are beyond human comprehension. I might be able to understand in some general sense that its behaviors are guided by emergent patterns, that it seems to be engaged in calculating probabilities, etc. -- but the main structures and dynamics guiding its cognition might be new principles, which apply only to minds vastly smarter than humans and can't be grokked by mere human brains...

-- Ben
Re: [agi] Low I.Q. AGI
On 4/15/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> > My points are:
> >
> > (1) AGI can be more intelligent than humans in a certain sense, but it
> > should still be understandable in principle.
>
> The AGI systems humans create will be understandable by humans in principle. Agreed. But let's call these AGI0. Then, AGI0 will create AGI1, which will be understandable by AGI0 in principle... And, AGI1 will create AGI2, which will be understandable by AGI1 in principle... etc. At what point will AGI_n no longer be understandable by humans in principle? -- where by "understand in principle", I mean "understand in principle, given realistic bounds on the time and memory resources used to carry out this understanding".

According to my belief, the way to create AGI is to have a general theory of intelligence, which should cover the common principle under all kinds of intelligent systems, including human intelligence, computer intelligence, etc., even alien intelligence and superhuman AGI. Therefore, this theory should also cover your AGI0 to AGIn.

Pei

> > (2) Intelligence in AGI will continue to improve, both by humans and by
> > AGI, but it will still take time. There is no reason to believe that
> > the time will be infinitely short.
>
> Not infinitely short, unless current physics is badly wrong in certain relevant respects. But if AGI1 can think 1000 times faster than a human, maybe AGI2 will be able to think 10,000 times as fast, etc. An infinite rate is not necessary for the result to be incomprehensibly rapid as compared to the human brain.
>
> > > Or are you doubting that a massively superhuman intelligence would be beyond
> > > the scope of understanding of ordinary, unaugmented humans?
> >
> > It depends on what you mean by "understanding" --- the general
> > principle or concrete behaviors.
>
> My hypothesis is that for large n, AGI_n as defined above will likely obey general principles that humans are not able to understand assuming reasonable time and memory constraints on their understanding process.
>
> -- Ben G
Re: [agi] Low I.Q. AGI
> My points are:
>
> (1) AGI can be more intelligent than humans in a certain sense, but it
> should still be understandable in principle.

The AGI systems humans create will be understandable by humans in principle. Agreed. But let's call these AGI0. Then, AGI0 will create AGI1, which will be understandable by AGI0 in principle... And, AGI1 will create AGI2, which will be understandable by AGI1 in principle... etc.

At what point will AGI_n no longer be understandable by humans in principle? -- where by "understand in principle", I mean "understand in principle, given realistic bounds on the time and memory resources used to carry out this understanding".

> (2) Intelligence in AGI will continue to improve, both by humans and by
> AGI, but it will still take time. There is no reason to believe that
> the time will be infinitely short.

Not infinitely short, unless current physics is badly wrong in certain relevant respects. But if AGI1 can think 1000 times faster than a human, maybe AGI2 will be able to think 10,000 times as fast, etc. An infinite rate is not necessary for the result to be incomprehensibly rapid as compared to the human brain.

> > Or are you doubting that a massively superhuman intelligence would be beyond
> > the scope of understanding of ordinary, unaugmented humans?
>
> It depends on what you mean by "understanding" --- the general
> principle or concrete behaviors.

My hypothesis is that for large n, AGI_n as defined above will likely obey general principles that humans are not able to understand assuming reasonable time and memory constraints on their understanding process.

-- Ben G
Re: [agi] Low I.Q. AGI
On 4/15/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

> Pei,
>
> A key point is that, unlike a human, a well-architected AGI should be able to easily increase its intelligence via adding memory, adding faster processors, adding more processors, and so forth. As well as by analyzing its own processes and their flaws with far more accuracy than any near-term brain scan...

Sure, these factors will increase the system's capability, though they will not change its working principle.

> > However, to say "intelligence will continue to evolve" and "there will be a moment after which things will completely go beyond our understanding" are not the same.
>
> True, they're not the same.
>
> It is a reasonable hypothesis that AGIs created by humans will find themselves unable -- even after a lot of self-study and a lot of hardware augmentation -- to dramatically transcend the human level of intelligence. I.e., the idea of human-created algorithms bootstrapping beyond the human level could be infeasible. This seems highly unlikely to me, but I can't say it's an idiotic hypothesis.
>
> Is the above the hypothesis you're making?

Not exactly. My points are:

(1) AGI can be more intelligent than humans in a certain sense, but it should still be understandable in principle.

(2) Intelligence in AGI will continue to improve, both by humans and by AGI, but it will still take time. There is no reason to believe that the time will be infinitely short.

> Or are you doubting that a massively superhuman intelligence would be beyond the scope of understanding of ordinary, unaugmented humans?

It depends on what you mean by "understanding" --- the general principle or concrete behaviors.

Pei
Re: [agi] Low I.Q. AGI
Pei,

A key point is that, unlike a human, a well-architected AGI should be able to easily increase its intelligence via adding memory, adding faster processors, adding more processors, and so forth. As well as by analyzing its own processes and their flaws with far more accuracy than any near-term brain scan...

> However, to say "intelligence will continue to evolve" and "there will be a moment after which things will completely go beyond our understanding" are not the same.

True, they're not the same.

It is a reasonable hypothesis that AGIs created by humans will find themselves unable -- even after a lot of self-study and a lot of hardware augmentation -- to dramatically transcend the human level of intelligence. I.e., the idea of human-created algorithms bootstrapping beyond the human level could be infeasible. This seems highly unlikely to me, but I can't say it's an idiotic hypothesis.

Is the above the hypothesis you're making? Or are you doubting that a massively superhuman intelligence would be beyond the scope of understanding of ordinary, unaugmented humans?

Ben
Re: [agi] Low I.Q. AGI
On 4/15/07, Pei Wang <[EMAIL PROTECTED]> wrote:

> I actually agree with most of what Richard and Ben said, that is, we can create AI that is "more intelligent", in some sense, than human beings --- that is also what I've been working on.
>
> However, to me "Singularity" is a stronger claim than "superhuman intelligence". It implies that the intelligence of AI will increase exponentially, to a point that is shorter than what we can perceive or understand. That is what I'm not convinced of.

I too generally agree with the improving intelligence scenario Richard described, but would like to point out a rarely appreciated aspect: While such a machine intelligence will quickly far exceed human capabilities, from its own perspective it will rapidly hit a wall due to having exhausted all opportunities for effective interaction with its environment. It could then explore an open-ended possibility space à la Schmidhuber, but such increasingly detached exploration will be increasingly detached from "intelligence" in an effective sense.

- Jef
Re: [agi] Low I.Q. AGI
On 4/15/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:

> Do you think a dog has a good understanding of your daily activities? How about a field mouse? A cyanobacterium?

I don't think so. However, when we understand "intelligence" well enough to build an AGI, we will be able to understand in principle how a superhuman intelligence works, though we cannot predict or explain its individual actions.

> Why should the current status quo be the crown of evolutionary infoprocessing achievement?

Did I suggest that it should be the case? I thought I said the opposite in my message. However, to say "intelligence will continue to evolve" and "there will be a moment after which things will completely go beyond our understanding" are not the same.

Pei
Re: [agi] Low I.Q. AGI
On Sun, Apr 15, 2007 at 07:40:03AM -0700, Eric B. Ramsay wrote:

> There is an easy assumption of most writers on this board that once
> the AGI exists, its route to becoming a singularity is a sure thing.

The singularity is just a rather arbitrary cutoff on the advancing horizon of predictability. We're soaking in a process with multiple positive feedback loops right now. You'll never notice you've passed the Schwarzschild radius when falling into Sagittarius A* either.

> Why is that? In humans there is a wide range of "smartness" in
> the population. People face intellectual thresholds that they cannot

But you can't pick out the smart ones and make a few million copies of them if you have a nice personal project.

> cross because they just do not have enough of this smartness thing.
> Although as a physicist I understand General Relativity, I really
> doubt that if it had been left up to me that it would ever have been
> discovered - no matter how much time I was given. Do neuroscientists

A dog running for a million years and never discovering GR is the more canonical example. But it takes a minimal intelligence in order to start manipulating intelligence.

> know where this talent difference comes from in terms of brain

You could scan the vitrified brain of a freshly dead and cryonically suspended expert at arbitrary resolution. The information is certainly in there.

> structure? Where in the designs for other AGI (Ben's for example) is
> the smartness of the AGI designed in? I can see how an awareness may

Let's say I give you a knob which would slowly mushroom your neocortex. It would just insert new neurons between the existing ones. Do you think you would notice something, after a few years?

> bubble up from a design but this doesn't mean a system smart enough to
> move itself towards being a singularity. Even if you feed the system

Evolution is dumb as a rock, yet it produced you, who are capable of producing this string of symbols, distributed across a planet and understood by similarly constructed systems. We certainly can do what evolution did, and maybe a bit more.

> all the information in the world, it would know a lot but not be any
> smarter or even know how to make itself smarter. How many years of
> training will we give a brand new AGI before we decide it's retarded?

How about a self-selecting population of a few trillion? The cybervillage idiots will never even be a single screen blip.

-- Eugen Leitl http://leitl.org
Re: [agi] Low I.Q. AGI
On Sun, Apr 15, 2007 at 02:06:52PM -0400, Pei Wang wrote:

> I actually agree with most of what Richard and Ben said, that is, we
> can create AI that is "more intelligent", in some sense, than human
> beings --- that is also what I've been working on.

Ability to instantiate vanilla experts at the drop of a hat (without having to spend some 30 years, megabucks, and a failure rate of 98%) is a major advantage already. In fact, automation about as smart as an insect can completely transform transportation, manufacturing, the military, and a few other disciplines, leaving people to deal with more worthwhile tasks (specialization is for insects).

> However, to me "Singularity" is a stronger claim than "superhuman
> intelligence". It implies that the intelligence of AI will increase
> exponentially, to a point that is shorter than what we can perceive or

A worm can 0wn the entire planetary infrastructure within minutes. http://www.caida.org/publications/papers/2003/sapphire/sapphire.html

"The Sapphire Worm was the fastest computer worm in history. As it began spreading throughout the Internet, it doubled in size every 8.5 seconds. It infected more than 90 percent of vulnerable hosts within 10 minutes."

An initial AI will be maladapted. Meaning, it will take a lot fewer resources to run than to bootstrap. Additionally, annexing online resources by remote exploits of known and unknown vulnerabilities allows very short doubling times (even considering the agent's size). There's not a lot of hardware online right now, and the bandwidth/latency is rather negligible, especially at the edges, but in 50, 100 years... I see no reason why a distant successor of a game console won't have enough resources for a human equivalent. There will be many billions of such nodes on the network, maybe trillions.

It's quite easy to see that dedicated hardware could do in ~ns what biology does in ~ms, which is six orders of magnitude apart. A million days is about three kiloyears. Even if one is not religious in the way of linear semi-log plots, at the current rate of progress acceleration some megayear times a few billion human equivalents is nothing to sneeze at. (The arithmetic is worked out in the short sketch after this message.) Things are slower on the physical layer, but not that much slower. I could see hardware doubling times in hours to days, as in: built from scratch, not by remote annexation. It might be slow if the speed of light is about the speed of sound subjectively, but for us normal folks it could well be a) terrifying b) terminal.

> understand. That is what I'm not convinced of.

Do you think a dog has a good understanding of your daily activities? How about a field mouse? A cyanobacterium? Why should the current status quo be the crown of evolutionary infoprocessing achievement?

-- Eugen Leitl http://leitl.org
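The back-of-the-envelope arithmetic behind the above, worked out explicitly (the 8.5-second doubling time is from the cited CAIDA analysis; the ~ns vs ~ms speedup and the million subjective days are Eugen's order-of-magnitude assumptions, not measurements):

doubling_s = 8.5                 # Sapphire worm doubling time (CAIDA)
doublings = 600 / doubling_s     # if that doubling were sustained for ten minutes
print(f"growth in 10 min: 2^{doublings:.0f} ~ {2 ** doublings:.1e}x")

speedup = 1e-3 / 1e-9            # ~ms biology step vs ~ns hardware step
print(f"speedup: {speedup:.0e}, i.e. six orders of magnitude")

days = 1e6                       # a million subjective days per real-time day
print(f"a million days is {days / 365.25:,.0f} years, about three kiloyears")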
Re: [agi] Low I.Q. AGI
I actually agree with most of what Richard and Ben said, that is, we can create AI that is "more intelligent", in some sense, than human beings --- that is also what I've been working on.

However, to me "Singularity" is a stronger claim than "superhuman intelligence". It implies that the intelligence of AI will increase exponentially, to a point that is shorter than what we can perceive or understand. That is what I'm not convinced of.

Pei

On 4/15/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

I find that I agree with nearly all of Loosemore's comments in his reply... I certainly agree with Pei that, in terms of spreading the AGI meme among researchers in academia and industry, focusing on the Singularity aspect is not good marketing. And, as a matter of pragmatic time-management, I am spending most of my "AGI R&D time" working on actually getting to the point of achieving advanced artificial cognition, rather than thinking about how to make an advanced AGI yet more advanced. (Though I do agree with Eliezer Yudkowsky and others that it is important to think about the ethics of advanced AGIs now, in advance of constructing them; and that one wants to think very deeply before creating an AGI that has significant potential to rapidly accelerate its own intelligence beyond the human level.)

But, all these issues aside, I am close to certain that once we have a near-human-level AGI, then -- if we choose to effect a transition to superhuman-level AI -- it won't be a huge step to do so. And I am close to certain that once we have a superhuman-level AGI, a host of other technologies like strong nanotech, genetic engineering, quantum computing etc. etc. will follow. Of course, this is all speculation and plenty of unknown things could go wrong. But, to me, the logic in favor of the above conclusions seems pretty solid.

-- Ben G

On 4/15/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Eric B. Ramsay wrote:
> > There is an easy assumption of most writers on this board that once the
> > AGI exists, its route to becoming a singularity is a sure thing. Why is
> > that? In humans there is a wide range of "smartness" in the population.
> > People face intellectual thresholds that they cannot cross because they
> > just do not have enough of this smartness thing. Although as a
> > physicist I understand General Relativity, I really doubt that if it had
> > been left up to me that it would ever have been discovered - no matter
> > how much time I was given. Do neuroscientists know where this talent
> > difference comes from in terms of brain structure? Where in the designs
> > for other AGI (Ben's for example) is the smartness of the AGI designed
> > in? I can see how an awareness may bubble up from a design but this
> > doesn't mean a system smart enough to move itself towards being a
> > singularity. Even if you feed the system all the information in the
> > world, it would know a lot but not be any smarter or even know how to
> > make itself smarter. How many years of training will we give a brand new
> > AGI before we decide it's retarded?
>
> Eric,
>
> I am going to address your question, as well as Pei's response that
> there should not really be a direct relationship between AGI and the
> Singularity.
>
> In the course of building an AGI, we (the designers of the AGI) will
> have to understand a great deal about what makes an intelligence tick.
> By the time we get anything working at all, we will know a lot more
> about the workings of intelligence than we do now.
>
> Now, our first attempts to build a full intelligence will very probably
> result in many test systems that have a "low IQ" -- systems that are not
> capable of being as smart as their designers.
>
> If we were standing in front of a human with that kind of low IQ, we
> would face a long, hard job (and in some cases, an impossible job) to
> improve their intelligence. But that is most emphatically not the case
> with a low-IQ AGI prototype. At the very least, we would be able to
> inspect the system during actual thinking episodes, in order to get
> clues about what goes right and what goes wrong.
>
> So, combining the knowledge we will have acquired during the design
> phase with the vast amount of performance data available during the
> prototype phase, there are ample opportunities for us to improve the
> design. Specifically, we will try to find out what ingredients are
> needed to make the system extremely creative. (As well as extremely
> balanced and friendly, of course.)
>
> By this means, I believe there would be no substantial obstacles to our
> getting the system up to the average human level of performance. I
> cannot guarantee this, of course, but there are no in-principle reasons
> why not. In fact, there are no reasons why we should not be able to get
> it up to a superhuman level of performance just by our own R&D efforts
> (some people seem to think that there is something inherently impossible
> about a human being able to design something smarter than itself, but
> that idea is really just science-fiction hearsay, not grounded in any
> real limitations).
Re: [agi] Low I.Q. AGI
I find that I agree with nearly all of Loosemore's comments in his reply... I certainly agree with Pei that, in terms of spreading the AGI meme among researchers in academia and industry, focusing on the Singularity aspect is not good marketing. And, as a matter of pragmatic time-management, I am spending most of my "AGI R&D time" working on actually getting to the point of achieving advanced artificial cognition, rather than thinking about how to make an advanced AGI yet more advanced. (Though I do agree with Eliezer Yudkowsky and others that it is important to think about the ethics of advanced AGIs now, in advance of constructing them; and that one wants to think very deeply before creating an AGI that has significant potential to rapidly accelerate its own intelligence beyond the human level.)

But, all these issues aside, I am close to certain that once we have a near-human-level AGI, then -- if we choose to effect a transition to superhuman-level AI -- it won't be a huge step to do so. And I am close to certain that once we have a superhuman-level AGI, a host of other technologies like strong nanotech, genetic engineering, quantum computing etc. etc. will follow. Of course, this is all speculation and plenty of unknown things could go wrong. But, to me, the logic in favor of the above conclusions seems pretty solid.

-- Ben G

On 4/15/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Eric B. Ramsay wrote:
> There is an easy assumption of most writers on this board that once the
> AGI exists, its route to becoming a singularity is a sure thing. Why is
> that? In humans there is a wide range of "smartness" in the population.
> People face intellectual thresholds that they cannot cross because they
> just do not have enough of this smartness thing. Although as a
> physicist I understand General Relativity, I really doubt that if it had
> been left up to me that it would ever have been discovered - no matter
> how much time I was given. Do neuroscientists know where this talent
> difference comes from in terms of brain structure? Where in the designs
> for other AGI (Ben's for example) is the smartness of the AGI designed
> in? I can see how an awareness may bubble up from a design but this
> doesn't mean a system smart enough to move itself towards being a
> singularity. Even if you feed the system all the information in the
> world, it would know a lot but not be any smarter or even know how to
> make itself smarter. How many years of training will we give a brand new
> AGI before we decide it's retarded?

Eric,

I am going to address your question, as well as Pei's response that there should not really be a direct relationship between AGI and the Singularity.

In the course of building an AGI, we (the designers of the AGI) will have to understand a great deal about what makes an intelligence tick. By the time we get anything working at all, we will know a lot more about the workings of intelligence than we do now.

Now, our first attempts to build a full intelligence will very probably result in many test systems that have a "low IQ" -- systems that are not capable of being as smart as their designers.

If we were standing in front of a human with that kind of low IQ, we would face a long, hard job (and in some cases, an impossible job) to improve their intelligence. But that is most emphatically not the case with a low-IQ AGI prototype. At the very least, we would be able to inspect the system during actual thinking episodes, in order to get clues about what goes right and what goes wrong.

So, combining the knowledge we will have acquired during the design phase with the vast amount of performance data available during the prototype phase, there are ample opportunities for us to improve the design. Specifically, we will try to find out what ingredients are needed to make the system extremely creative. (As well as extremely balanced and friendly, of course.)

By this means, I believe there would be no substantial obstacles to our getting the system up to the average human level of performance. I cannot guarantee this, of course, but there are no in-principle reasons why not. In fact, there are no reasons why we should not be able to get it up to a superhuman level of performance just by our own R&D efforts (some people seem to think that there is something inherently impossible about a human being able to design something smarter than itself, but that idea is really just science-fiction hearsay, not grounded in any real limitations).

Okay, so if we assume that we can build a roughly-human-level intelligence, what next? The next phase is again very different to the case of having a human genius hanging around. [Aside: by 'genius' I just mean 'very bright compared with average' - I don't mean 'person with magically superhuman powers of intelligence and creativity'.] This system will be capable of being augmented in a number of ways that are simply not possible with humans. Pure physical technology advances will promise the uploading of the original system into faster hardware ... so even if we and it NEVER did another stroke of work to improve its intelligence, we might find that it would get faster every time an electronic hardware upgrade became available.
Re: [agi] Low I.Q. AGI
Eric B. Ramsay wrote:

There is an easy assumption of most writers on this board that once the AGI exists, its route to becoming a singularity is a sure thing. Why is that? In humans there is a wide range of "smartness" in the population. People face intellectual thresholds that they cannot cross because they just do not have enough of this smartness thing. Although as a physicist I understand General Relativity, I really doubt that if it had been left up to me that it would ever have been discovered - no matter how much time I was given. Do neuroscientists know where this talent difference comes from in terms of brain structure? Where in the designs for other AGI (Ben's for example) is the smartness of the AGI designed in? I can see how an awareness may bubble up from a design but this doesn't mean a system smart enough to move itself towards being a singularity. Even if you feed the system all the information in the world, it would know a lot but not be any smarter or even know how to make itself smarter. How many years of training will we give a brand new AGI before we decide it's retarded?

Eric,

I am going to address your question, as well as Pei's response that there should not really be a direct relationship between AGI and the Singularity.

In the course of building an AGI, we (the designers of the AGI) will have to understand a great deal about what makes an intelligence tick. By the time we get anything working at all, we will know a lot more about the workings of intelligence than we do now.

Now, our first attempts to build a full intelligence will very probably result in many test systems that have a "low IQ" -- systems that are not capable of being as smart as their designers.

If we were standing in front of a human with that kind of low IQ, we would face a long, hard job (and in some cases, an impossible job) to improve their intelligence. But that is most emphatically not the case with a low-IQ AGI prototype. At the very least, we would be able to inspect the system during actual thinking episodes, in order to get clues about what goes right and what goes wrong.

So, combining the knowledge we will have acquired during the design phase with the vast amount of performance data available during the prototype phase, there are ample opportunities for us to improve the design. Specifically, we will try to find out what ingredients are needed to make the system extremely creative. (As well as extremely balanced and friendly, of course.)

By this means, I believe there would be no substantial obstacles to our getting the system up to the average human level of performance. I cannot guarantee this, of course, but there are no in-principle reasons why not. In fact, there are no reasons why we should not be able to get it up to a superhuman level of performance just by our own R&D efforts (some people seem to think that there is something inherently impossible about a human being able to design something smarter than itself, but that idea is really just science-fiction hearsay, not grounded in any real limitations).

Okay, so if we assume that we can build a roughly-human-level intelligence, what next? The next phase is again very different to the case of having a human genius hanging around. [Aside: by 'genius' I just mean 'very bright compared with average' - I don't mean 'person with magically superhuman powers of intelligence and creativity'.] This system will be capable of being augmented in a number of ways that are simply not possible with humans.

First factor: Speed. Pure physical technology advances will promise the uploading of the original system into faster hardware ... so even if we and it NEVER did another stroke of work to improve its intelligence, we might find that it would get faster every time an electronic hardware upgrade became available. After a few years, it might be able to operate a thousand times faster than humans purely because of this factor.

Second factor: Duplication. The original AGI (with full adult intelligence) could be duplicated in such a way that, for every genius machine we produce, we could build a thousand copies and get them (persuade them) to work together as a team. That is significant: human geniuses are rare, so what would happen if we could take an adult Einstein and quickly make a thousand similar brains? Never possible with humans; entirely feasible with a smart AGI.

Third factor: Communication bandwidth. This huge team of genius AGIs would be able to talk to each other at rates that we can hardly even imagine. Human teams tend to suffer from problems when they become too large: some of those problems could be overcome because the AGI team would all (effectively) be in 'telepathic' contact with one another ... able to exchange ideas and inspiration without having to go through managers and committee meetings.

Result: the AGI team of a thousand geniuses would be able to work at ...
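To see how the three factors above compound, here is a toy multiplication in Python; every number is a hypothetical round figure chosen only to illustrate the compounding, not a prediction:

# Toy model of the three augmentation factors described above.
# All values are hypothetical round figures.
hardware_speedup = 1_000   # factor 1: faster substrate than biological brains
num_copies = 1_000         # factor 2: duplicated adult-level AGIs on a team
coordination = 0.5         # factor 3 proxy: assumed fraction of ideal output
                           # retained thanks to high-bandwidth communication

effective = hardware_speedup * num_copies * coordination
print(f"team output ~ {effective:,.0f}x a single human genius")  # ~500,000x

Even with a pessimistic coordination loss, the factors multiply rather than add: one year of such a team's work would correspond to roughly half a million genius-years.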
Re: [agi] Low I.Q. AGI
Peter,

Many of the arguments for "AGI will necessarily lead to singularity" that I've seen are based on either simple extrapolation of history or a wrong conception of intelligence (i.e., the assumption that an AGI will necessarily know how to design a better AGI).

Of course I cannot prove that singularity is impossible. However, I think we have too little evidence to talk about it seriously, and to bundle the notion of singularity with AGI will not serve us well at the current time. Therefore, I'd rather discuss AGI than singularity. ;-)

Pei

On 4/15/07, Peter Voss <[EMAIL PROTECTED]> wrote:

Pei, what are yours?

-----Original Message-----
From: Pei Wang [mailto:[EMAIL PROTECTED]]
Sent: Sunday, April 15, 2007 8:32 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Low I.Q. AGI

On 4/15/07, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> There is an easy assumption of most writers on this board that once the AGI
> exists, its route to becoming a singularity is a sure thing.

I'm not sure whether this assumption is really shared by most of the people here. At least I don't think AGI leads to singularity, though my reason is not the same as yours.

Pei

> Why is that?
> In humans there is a wide range of "smartness" in the population. People
> face intellectual thresholds that they cannot cross because they just do not
> have enough of this smartness thing. Although as a physicist I understand
> General Relativity, I really doubt that if it had been left up to me that it
> would ever have been discovered - no matter how much time I was given. Do
> neuroscientists know where this talent difference comes from in terms of
> brain structure? Where in the designs for other AGI (Ben's for example) is
> the smartness of the AGI designed in? I can see how an awareness may bubble
> up from a design but this doesn't mean a system smart enough to move itself
> towards being a singularity. Even if you feed the system all the information
> in the world, it would know a lot but not be any smarter or even know how to
> make itself smarter. How many years of training will we give a brand new AGI
> before we decide it's retarded?
RE: [agi] Low I.Q. AGI
Pei, what are yours?

-----Original Message-----
From: Pei Wang [mailto:[EMAIL PROTECTED]]
Sent: Sunday, April 15, 2007 8:32 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Low I.Q. AGI

On 4/15/07, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> There is an easy assumption of most writers on this board that once the AGI
> exists, its route to becoming a singularity is a sure thing.

I'm not sure whether this assumption is really shared by most of the people here. At least I don't think AGI leads to singularity, though my reason is not the same as yours.

Pei

> Why is that?
> In humans there is a wide range of "smartness" in the population. People
> face intellectual thresholds that they cannot cross because they just do not
> have enough of this smartness thing. Although as a physicist I understand
> General Relativity, I really doubt that if it had been left up to me that it
> would ever have been discovered - no matter how much time I was given. Do
> neuroscientists know where this talent difference comes from in terms of
> brain structure? Where in the designs for other AGI (Ben's for example) is
> the smartness of the AGI designed in? I can see how an awareness may bubble
> up from a design but this doesn't mean a system smart enough to move itself
> towards being a singularity. Even if you feed the system all the information
> in the world, it would know a lot but not be any smarter or even know how to
> make itself smarter. How many years of training will we give a brand new AGI
> before we decide it's retarded?
Re: [agi] Low I.Q. AGI
On 4/15/07, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:

There is an easy assumption of most writers on this board that once the AGI exists, its route to becoming a singularity is a sure thing.

I'm not sure whether this assumption is really shared by most of the people here. At least I don't think AGI leads to singularity, though my reason is not the same as yours.

Pei

Why is that? In humans there is a wide range of "smartness" in the population. People face intellectual thresholds that they cannot cross because they just do not have enough of this smartness thing. Although as a physicist I understand General Relativity, I really doubt that if it had been left up to me that it would ever have been discovered - no matter how much time I was given. Do neuroscientists know where this talent difference comes from in terms of brain structure? Where in the designs for other AGI (Ben's for example) is the smartness of the AGI designed in? I can see how an awareness may bubble up from a design but this doesn't mean a system smart enough to move itself towards being a singularity. Even if you feed the system all the information in the world, it would know a lot but not be any smarter or even know how to make itself smarter. How many years of training will we give a brand new AGI before we decide it's retarded?
[agi] Low I.Q. AGI
There is an easy assumption of most writers on this board that once the AGI exists, its route to becoming a singularity is a sure thing. Why is that? In humans there is a wide range of "smartness" in the population. People face intellectual thresholds that they cannot cross because they just do not have enough of this smartness thing. Although as a physicist I understand General Relativity, I really doubt that if it had been left up to me that it would ever have been discovered - no matter how much time I was given. Do neuroscientists know where this talent difference comes from in terms of brain structure? Where in the designs for other AGI (Ben's for example) is the smartness of the AGI designed in? I can see how an awareness may bubble up from a design but this doesn't mean a system smart enough to move itself towards being a singularity. Even if you feed the system all the information in the world, it would know a lot but not be any smarter or even know how to make itself smarter. How many years of training will we give a brand new AGI before we decide it's retarded?