Humans have a built-in animal drive system (emotion and the pleasure/pain dichotomy), which works in tandem with the goal-less observation system that constitutes our intelligence. Without a drive to give direction and precedence to choices of behavior, I don't imagine the intelligence we exhibit would actually do anything. We would be difficult to control in the way a large boulder is difficult to control -- we would be inert. How does the AGI machine you propose decide what to do with the regularities it finds in the incoming sensory data? Or is it also inert?
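To make the point concrete, here is a toy sketch of my own (nothing to do with your entropy processor; every name and number in it is made up): a goal-less observer proposes candidate behaviors, and a scalar drive signal standing in for pleasure/pain is what gives one of them precedence over the others. Remove the drive and the selection step has nothing to maximize, so the system does nothing.

    from typing import Callable, Optional

    def choose_action(candidates: list[str],
                      drive: Optional[Callable[[str], float]] = None) -> Optional[str]:
        """Pick the candidate the drive values most; with no drive, do nothing."""
        if not candidates or drive is None:
            # Goal-less observation alone: regularities may be found, but nothing
            # gives one behavior precedence over another, so no action is taken.
            return None
        return max(candidates, key=drive)

    # Hypothetical behaviors the observation system might derive from regularities
    # in the sensory stream (names and values are illustrative only).
    candidates = ["approach_food", "avoid_heat", "do_nothing"]

    # A stand-in pleasure/pain signal: higher means more pleasure.
    reward = {"approach_food": 1.0, "avoid_heat": 0.5, "do_nothing": 0.0}

    print(choose_action(candidates))                       # None -> inert
    print(choose_action(candidates, lambda a: reward[a]))  # approach_food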
On Fri, Aug 24, 2012 at 9:40 AM, Sergio Pissanetzky <ser...@scicontrols.com> wrote:
> Matt,
>
> Understood. I suggest an entropy approach, based on the observation that
> entropy reduction causes self-organization and the formation of patterns. To
> my knowledge, this has never been tried before, except by me. I have reason
> to believe that our brains work that way.
>
> The AGI machine I propose consists of an entropy processor with memory,
> input and output, that's all. No computer, no program, except that almost
> certainly the entropy processor will be a computer programmed for that task.
> Completely problem-independent and data-agnostic. Everything else goes in as
> data. It works, within my limitations, and I am trying to build a larger one
> with an FPGA.
>
> One major difference with current AGI attempts, is that my AGI can not be
> controlled. Your only interaction with it is to give it information. You can
> see considerable similarities with humans.
>
> Sergio
>
> -----Original Message-----
> From: Matt Mahoney [mailto:mattmahone...@gmail.com]
> Sent: Friday, August 24, 2012 9:17 AM
> To: AGI
> Subject: Re: [agi] Hugo de Garis on the Singhilarity Institute and the
> hopelessness of Friendly AI ...
>
> On Fri, Aug 24, 2012 at 9:52 AM, Sergio Pissanetzky <ser...@scicontrols.com>
> wrote:
> > No it's not. Because Watson and its program have been developed by
> > humans. I meant Google, as a machine, without any humans writing a
> > program and telling it how to learn to play chess.
>
> So I guess what you want is a machine where you can describe the rules of
> chess or any other game using English words, and it will learn to play the
> game. That's a language modeling problem. It's one of the hard problems of
> AI that we haven't solved yet, along with vision, hearing, robotics, music,
> art, humor, and some others. I have no reason to believe that these problems
> won't be solved eventually. It will probably require a lot of computing
> power and a lot of human effort in programming and training. What do you
> suggest?
>
> -- Matt Mahoney, mattmahone...@gmail.com