It's just that something like world hunger is so complex that an AGI would have to master simpler problems first. Also, there are many people and institutions that already have solutions to world hunger, and they get ignored. So an AGI would have to become established over a period of time before anyone really cared what it had to say about these kinds of issues. It could run simulations and come up with solutions, but they would not get implemented unless it had the power to influence. So in addition, an AGI would need to know how to make people listen... and maybe obey.
IMO I think AGI will take the embedded route - like other types of computer systems (IRS, weather, military, Google, etc.) - and we will become dependent on it intergenerationally, so that it is impossible to survive without it. At that point AGIs will have the power to influence.

John

From: Ian Parker [mailto:ianpark...@gmail.com]
Sent: Saturday, June 26, 2010 2:19 PM
To: agi
Subject: Re: [agi] The problem with AGI per Sloman

Actually, if you are serious about solving a political or social question, then what you really need is CRESS <http://cress.soc.surrey.ac.uk/web/home>. The solution of world hunger is, BTW, a political question, not a technical one. Hunger is largely due to bad governance in the Third World. How do you get good governance? One way to look at the problem is via CRESS, running simulations in Second Life.

One thing that has struck me in my linguistic researches is this: Google Translate is based on having gigabytes of bilingual text. The fact that GT is so bad at technical Arabic indicates the absence of such bilingual text. Indeed, Israel publishes more papers than the whole of the Islamic world. This is of profound importance for understanding the Middle East. I am sure CRESS would confirm this.

AGI would without a doubt approach political questions by examining all the data about the various countries before reaching a conclusion. AGI would probably be what you would consult for long-term solutions. It might not be so good at dealing with something like (say) the Gaza flotilla. In coming to this conclusion I have the University of Surrey and CRESS in mind.

  - Ian Parker

On 26 June 2010 14:36, John G. Rose <johnr...@polyplexic.com> wrote:
> -----Original Message-----
> From: Ian Parker [mailto:ianpark...@gmail.com]
>
> How do you solve world hunger? Does AGI have to? I think if it is truly "G" it
> has to. One way would be to find out what other people have written on the
> subject and analyse the feasibility of their solutions.
>
> Yes, that would show the generality of their AGI theory. Maybe a particular AGI
> might be able to work on some problems but plateau in its intelligence for
> whatever reason and not be able to work on more sophisticated issues. An AGI
> could be "hardcoded," perhaps, and not improve much, whereas another AGI might
> improve to where it could tackle vast unknowns with increasing efficiency.
> There are common components in tackling unknowns - complexity classes, for
> example - but some AGI systems may operate significantly more efficiently and
> improve. Human brains may plateau at some point without further augmentation,
> though I'm not sure we have come close to what the brain is capable of.
>
> John

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com