Ben, thanks for the answer! Are you saying that organisations do some bad stuff not because that stuff is "profitable", but because they lack sufficient intelligence? (You seem to have assessed them at roughly human-level intelligence - by the way, how do you measure that?)
And when organisations (in other words, human-level AGIs that already do have agency) combine with DeepMind-like technology (which has, or will have, additional intelligence, but no agency of its own), will they then have the combination of higher intelligence and agency that you deem necessary for true AGI? And will they therefore be "safer and friendlier" for their human "clients"? Is that so?

Roland

On Mon, Oct 17, 2016 at 6:38 AM, Ben Goertzel <[email protected]> wrote:
> Human organizations like corporations are intelligences, but pretty
> different from AGI systems, in that so much of their intelligence does
> come from the individuals within them.... They are fascinating,
> complex beasts, but their overall GI is limited by that of the humans
> comprising them, it seems...
>
> As for Deep Mind's recent, interesting work, I have repeatedly warned
> against the following erroneous line of reasoning:
>
> -- Method X, in principle, could encompass every sort of GI, if given
> sufficiently massive resources
>
> -- Method X, using a fairly modest amount of resources, can solve some
> interesting specific problems
>
> -- Therefore, method X, given a large but feasible amount of
> resources, can achieve human-level AGI
>
> This sort of thinking seems appealing but is often false... the nature
> of AI is that a lot of algorithms are a) omni-capable given sufficient
> resources, b) capable of doing some fairly interesting stuff with
> modest resources, c) nevertheless not viable for human-level AI using
> real-world resources
>
> I strongly suspect Google Deep Mind's "differentiable symbolic NNs"
> are in this category
>
> To be an autonomous agent, understanding new problems in real-world
> context, requires a lot more than puzzle-solving based on perceptual
> pattern recognition and long-term declarative and spatial memory....
>
> This recent Deep Mind stuff is a good step forward, but still a small
> step in the context of the overall problem of human-level AGI.
> Adding a simplified model of aspects of hippocampus to a simplified
> model of aspects of visual and auditory cortex is cool, but still a
> small fraction of a whole loosely-simulated brain
>
> -- Ben G
>
> On Sun, Oct 16, 2016 at 2:47 PM, Roland Pihlakas / Simplify.ee
> <[email protected]> wrote:
>> Ben, thanks for the answer!
>> In short, are you saying that agency is the main thing currently
>> missing from the DeepMind system, and maybe also from other AGI
>> attempts? Did I understand you correctly?
>> Regarding agency, I have had the following thoughts:
>> What about corporations or other kinds of somewhat bigger
>> organisations? Do they have agency in some suitable sense of the
>> word? If so, then we do have agency in some artificial creations
>> after all.
>> What about the viewpoint that organisations are already an old form
>> of AGI? They are relatively autonomous from the humans working inside
>> them. No single person can grasp or change very much of what goes on
>> in them. We humans are just cogs in there, human processors for
>> artificially intelligent software. The organisations have a kind of
>> mind of their own - their own laws of survival.
>> And then, in conclusion, if we combine that agency with new
>> technology like the DeepMind one referenced, do the
>> "organisation-AGIs" become even more powerful and autonomous forms of
>> AGI? Does my reasoning seem sound?
>> If so, are these forms of AGI "safe and friendly" too?
>> If they are not "safe and friendly", then what can be done to achieve
>> that? And how can it be done before the task becomes even more
>> complicated, as the organisations become even more powerful and
>> autonomous with the help of new technology? Apparently being "safe
>> and friendly" is not so much a software-level problem anymore...
>> What do you think?
>>
>> Roland
>>
>>
>> On Sat, Oct 15, 2016 at 8:05 PM, Ben Goertzel <[email protected]> wrote:
>>> I read the paper ...
>>> it's interesting stuff, but it certainly isn't
>>> AGI ... Whether it's a meaningful step toward AGI or not depends on
>>> your research paradigm...
>>>
>>> They add a long-term memory to their standard deep-NN architecture
>>> ... and in order to be able to do backprop through their LTM read
>>> and write operations, they do LTM read-write in terms of
>>> multiplication by weight-vectors...
>>>
>>> Very vaguely, if one views standard deep-NNs as models of aspects of
>>> cortex, one can view their matrix-LTM as a model of aspects of
>>> hippocampus
>>>
>>> However, the reliance on backprop is still un-good IMO, and they are
>>> still operating in a "one network, one task"
>>> standard-reinforcement-learning paradigm... they're not trying to
>>> build a persistent agent
>>>
>>> Still, it's definitely something new and interesting, rather than
>>> just one more application of CNNs to some new big glob of data...
>>>
>>> I saw nothing quite so innovative at the OpenAI unconference last
>>> week in San Francisco, for example ... though there were lots of
>>> people there doing interesting high-quality stuff...
>>>
>>> What is missing from AGI here? How about, being a persistent agent
>>> that creatively solves new problems it confronts in its environment,
>>> without careful preparation on a per-problem basis by human
>>> programmers? ;p
>>>
>>> ben
>>>
>>>
>>> On Sat, Oct 15, 2016 at 8:55 AM, Roland Pihlakas / Simplify.ee
>>> <[email protected]> wrote:
>>>> Hello
>>>>
>>>> https://deepmind.com/blog/differentiable-neural-computers/
>>>> According to my theory this is almost as good as AGI.
>>>> Perhaps except for the learning speed.
>>>> I am curious: what do you AGI-builders here see as missing from AGI
>>>> in this?
>>>> And secondly, perhaps more importantly - what will happen now?
>>>>
>>>> Thanks:
>>>> Roland
>>>>
>>>>
>>>> -------------------------------------------
>>>> AGI
>>>> Archives: https://www.listbox.com/member/archive/303/=now
>>>> RSS Feed: https://www.listbox.com/member/archive/rss/303/212726-deec6279
>>>> Modify Your Subscription: https://www.listbox.com/member/?&
>>>> Powered by Listbox: http://www.listbox.com
>>>
>>>
>>> --
>>> Ben Goertzel, PhD
>>> http://goertzel.org
>>>
>>> Super-benevolent super-intelligence is the thought the Global Brain is
>>> currently struggling to form...
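[Editor's note: the weight-vector read/write trick Ben mentions - making memory access a smooth multiplication so that backprop can flow through it - can be sketched in a few lines. This is a minimal NumPy illustration of soft, attention-weighted memory addressing under my own simplifying assumptions; it is not DeepMind's DNC code, which additionally uses content-based lookup, temporal link matrices, and usage-based allocation. All names and values below are invented for illustration.]

```python
import numpy as np

def softmax(x):
    """Turn raw controller logits into a smooth, differentiable address."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Memory matrix: N slots, each a vector of width W (sizes are illustrative)
N, W = 4, 3
M = np.zeros((N, W))

# --- Soft write: every slot is nudged a little, weighted by a softmax
# distribution, instead of one slot being overwritten outright ---
write_logits = np.array([2.0, 0.1, 0.1, 0.1])  # hypothetical controller output
w_write = softmax(write_logits)                # positive weights summing to 1
erase   = np.array([1.0, 1.0, 1.0])            # erase vector, entries in [0, 1]
add     = np.array([0.5, -0.2, 0.9])           # add vector (the data to store)
M = M * (1.0 - np.outer(w_write, erase)) + np.outer(w_write, add)

# --- Soft read: a weighted sum over all slots, not a hard lookup ---
read_logits = np.array([2.0, 0.1, 0.1, 0.1])
w_read = softmax(read_logits)
r = w_read @ M  # read vector, shape (W,)

# Because every step is multiplication and addition, gradients can
# propagate through reads and writes back into the controller's logits.
```

A hard read such as `M[np.argmax(read_logits)]` would break differentiability; the point of the weight-vector formulation is that the address itself is a smooth function of the network's outputs, so training can adjust *where* the network reads and writes, not just *what*.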
