[agi] Probability Processor

2010-08-17 Thread Jan Klauck
--- quotes: The US Defense Advanced Research Projects Agency financed the basic research necessary to create a processor that thinks in terms of probabilities instead of the certainties of ones and zeros. (...) So we have been rebuilding probability computing from the gate level all the way up to
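
The preview describes the idea only at a high level; as a rough illustration (my own sketch, not the DARPA-funded design), gate-level probability computing can be pictured as ordinary Boolean gates whose wires carry P(bit = 1) instead of hard 0/1 values, combined here under an independence assumption:

    # Sketch: logic gates operating on probabilities instead of hard bits.
    # Each "wire" carries P(bit = 1); gates combine them assuming the inputs
    # are statistically independent. Illustrative only -- not the processor
    # architecture described in the quoted article.

    def p_not(a: float) -> float:
        return 1.0 - a

    def p_and(a: float, b: float) -> float:
        return a * b          # P(A and B) under independence

    def p_or(a: float, b: float) -> float:
        return a + b - a * b  # P(A or B) under independence

    def p_nand(a: float, b: float) -> float:
        return 1.0 - p_and(a, b)

    if __name__ == "__main__":
        # One noisy input that is 1 with 90% probability, another with 60%.
        print(p_and(0.9, 0.6))   # 0.54
        print(p_or(0.9, 0.6))    # 0.96
        print(p_nand(0.9, 0.6))  # 0.46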

Re: [agi] AGI Alife

2010-08-01 Thread Jan Klauck
Ian Parker wrote I would like your opinion on *proofs* which involve an unproven hypothesis. I've no elaborated opinion on that.

Re: [agi] AGI Int'l Relations

2010-08-01 Thread Jan Klauck
Ian Parker wrote McNamara's dictum seems on the face of it to contradict the validity of Psychology as a science. I don't think so. That people switch to improvisation in unforeseen events isn't surprising. Even an AGI, confronted with a novel situation and lacking data and models and rules for

Re: [agi] AGI Int'l Relations

2010-08-01 Thread Jan Klauck
Steve Richfield wrote Have you ever taken a dispute, completely deconstructed it to determine its structure, engineered a prospective solution, and attempted to implement it? No. How can you, the participants on this forum, hope to ever bring stability That depends on your definition of

Re: [agi] AGI Int'l Relations

2010-08-01 Thread Jan Klauck
Steve Richfield wrote I suspect that this tool could work better than any AGI in the absence of such a tool. I see an AGI more as a support tool that collects and assesses data, creates and evaluates hypotheses, develops goals and plans how to reach them and assists people with advice. The

[agi] AGI Int'l Relations

2010-07-30 Thread Jan Klauck
(If you don't have time to read all this, scroll down to the questions.) I'm writing an article on the role of intelligent systems in the field of International Relations (IR). Why IR? Because in today's (and more so in tomorrow's) world the majority of national policies are influenced by foreign

Re: [agi] AGI Alife

2010-07-30 Thread Jan Klauck
Ian Parker wrote Then define your political objectives. No holes, no ambiguity, no forgotten cases. Or does the AGI ask for our feedback during the mission? If yes, down to what detail? With Matt's ideas it does exactly that. How does it know when to ask? You give it rules, but those rules can

Re: [agi] AGI Int'l Relations

2010-07-30 Thread Jan Klauck
Ian Parker wrote games theory It produced many studies, many strategies, but they weren't used that much in the daily business. It was used more as a general guide, and in times of crisis they preferred to rely on gut feelings. E.g., see http://en.wikipedia.org/wiki/The_Fog_of_War How do you

Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote There are the military costs, Do you realize that you often narrow a discussion down to military issues of the Iraq/Afghanistan theater? Freeloading in social simulation isn't about guys using a plane for free. When you analyse or design a system you look for holes in the

Re: [agi] Tweaking a few parameters

2010-07-28 Thread Jan Klauck
A. T. Murray wrote Robot: I AM ANDRU Robot: I AM ANDRU Robot: ANDRU HELPS KIDS Robot: KIDS MAKE ROBOTS Robot: ROBOTS NEED ME Robot: I IS I Robot: I AM ANDRU Robot: ANDRU HELPS KIDS Robot: KIDS MAKE ROBOTS For the first time in our dozen-plus years of developing MindForth, the

Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote What we would want in a *friendly* system would be a set of utilitarian axioms. If we program a machine for winning a war, we must think well what we mean by winning. (Norbert Wiener, Cybernetics, 1948) It is also important that AGI is fully axiomatic and proves that 1+1=2
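
Incidentally, the "1+1=2" bit is the kind of statement a proof assistant discharges in one line; a minimal Lean 4 illustration, just to make the example concrete (unrelated to any particular AGI design):

    -- Lean 4: 1 + 1 = 2 over the natural numbers holds by computation.
    example : 1 + 1 = 2 := rfl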

Re: [agi] AGI Alife

2010-07-28 Thread Jan Klauck
Ian Parker wrote If we program a machine for winning a war, we must think well what we mean by winning. I wasn't thinking about winning a war; I was thinking much more about sexual morality and men kissing. If we program a machine for doing X, we must think well what we mean by X. Now

Re: [agi] AGI Alife

2010-07-27 Thread Jan Klauck
Linas Vepstas wrote First my answers to Antonio: 1) What is the role of Digital Evolution (and ALife) in the AGI context? The nearest I can come up with is Goertzel's virtual pre-school idea, where the environment is given and the proto-AGI learns within it. It's certainly possible to place

Re: [agi] How do we hear music

2010-07-22 Thread Jan Klauck
Mike Tintner trolled And maths will handle the examples given: same tunes - different scales, different instruments; same face - cartoon, photo; same logo - different parts [buildings/ fruits/ human figures] Unfortunately I forgot. The answer is somewhere down there:

Re: [agi] The Collective Brain

2010-07-21 Thread Jan Klauck
Mike Tintner wrote You partly illustrate my point - you talk of artificial brains as if they actually exist That's the magic of thinking in scenarios. To you it may appear as if we can't differentiate between reality and a thought experiment. By implicitly pretending that artificial

Re: [agi] Seeking Is-a Functionality

2010-07-20 Thread Jan Klauck
Steve Richfield wrote maybe with percentages attached, so that people could announce that, say, I am 31% of the way to having an AGI. Not useful. AGI is still a hypothetical state and its true composition remains unknown. At best you can measure how much of an AGI plan is completed, but

Re: [agi] The Collective Brain

2010-07-20 Thread Jan Klauck
Mike Tintner wrote No, the collective brain is actually a somewhat distinctive idea. Just a way of looking at social support networks. Even social philosophers centuries ago had similar ideas--they lacked our technical understanding and used analogies from biology (organicism) instead.

[agi] Mathematical models of autonomous life

2008-11-03 Thread Jan Klauck
Researchers from the German Max Planck Society claim to have developed mathematical methods that allow (virtual and robotic) embodied entities to evolve on their own. They begin with a child-like state and develop by exploring both their environment and their personal capabilities. Well, not very
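
The announcement gives no algorithmic detail, so the following is only a toy sketch of the general flavour of such self-developing agents (my assumption, not the Max Planck group's method): an agent keeps a crude forward model of what its actions do and preferentially tries the action it currently predicts worst, i.e. it explores where it still has something to learn.

    import random

    # Toy self-exploring agent: it keeps a crude per-action forward model of
    # its one-dimensional "body state" and picks the action whose outcome it
    # currently predicts worst (a curiosity-like drive). Illustrative only.

    ACTIONS = [-1.0, 0.0, 1.0]

    def true_dynamics(state: float, action: float) -> float:
        # The world the agent does not know in advance.
        return 0.9 * state + action + random.gauss(0.0, 0.05)

    def run(steps: int = 200) -> None:
        state = 0.0
        model = {a: 0.0 for a in ACTIONS}   # predicted next state per action
        errors = {a: 1.0 for a in ACTIONS}  # running prediction error per action
        for _ in range(steps):
            # Explore the least-understood action.
            action = max(ACTIONS, key=lambda a: errors[a])
            nxt = true_dynamics(state, action)
            err = abs(nxt - model[action])
            model[action] += 0.2 * (nxt - model[action])    # update forward model
            errors[action] = 0.9 * errors[action] + 0.1 * err
            state = nxt
        print({a: round(e, 3) for a, e in errors.items()})

    if __name__ == "__main__":
        run()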

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Jan Klauck
Matt, People who haven't studied logic or its notation can certainly learn to do this type of reasoning. Formal logic doesn't scale up very well in humans. That's why this kind of reasoning is so unpopular. Our capacities are just that limited, and we connect to other human entities for a kind of
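
For readers who haven't seen it mechanised, "this type of reasoning" is essentially what a few lines of forward chaining over propositional rules do; a generic sketch (my example, not anything proposed in the thread):

    # Minimal forward chaining over propositional Horn rules:
    # keep applying rules until no new facts can be derived.

    RULES = [
        ({"rain", "outside"}, "wet"),  # rain AND outside -> wet
        ({"wet"}, "cold"),             # wet -> cold
    ]

    def forward_chain(facts: set[str]) -> set[str]:
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    if __name__ == "__main__":
        print(forward_chain({"rain", "outside"}))  # {'rain', 'outside', 'wet', 'cold'}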

Re: [agi] any advice

2008-09-09 Thread Jan Klauck
Dalle Molle Institute for Artificial Intelligence; University of Verona (Artificial Intelligence dept) If they were corporations, from which one would you buy shares? I would go for IDSIA. I mean, hey, you have Schmidhuber around. :) Jan